Test Report: KVM_Linux_crio 20045

70ee1ceb4b2f7849aa4717a6092bbfa282d9029b:2024-12-05:37344

Tests failed (12/213)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-633765 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-633765 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: signal: killed (39m59.949509638s)

-- stdout --
	* [addons-633765] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-633765" primary control-plane node in "addons-633765" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-633765 service yakd-dashboard -n yakd-dashboard
	
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-633765 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
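
To make the gcp-auth note in the stdout above concrete: a pod opts out of the credential mount by carrying the `gcp-auth-skip-secret` label in its own manifest. A minimal sketch of such a manifest follows; the pod name, container name, command, and the "true" label value are illustrative assumptions and are not taken from this run (only the label key and the busybox image appear in the output above).

	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-example      # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"   # label key quoted in the gcp-auth output above; value assumed
	spec:
	  containers:
	  - name: app                      # hypothetical container
	    image: docker.io/busybox:stable
	    command: ["sleep", "3600"]
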
** stderr ** 
	I1204 23:11:54.864038   22143 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:11:54.864536   22143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:54.864557   22143 out.go:358] Setting ErrFile to fd 2...
	I1204 23:11:54.864565   22143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:54.865007   22143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:11:54.866002   22143 out.go:352] Setting JSON to false
	I1204 23:11:54.866786   22143 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3261,"bootTime":1733350654,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:11:54.866881   22143 start.go:139] virtualization: kvm guest
	I1204 23:11:54.868911   22143 out.go:177] * [addons-633765] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:11:54.870838   22143 notify.go:220] Checking for updates...
	I1204 23:11:54.870853   22143 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:11:54.872192   22143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:11:54.873700   22143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:11:54.875227   22143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:11:54.876720   22143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:11:54.878122   22143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:11:54.879493   22143 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:11:54.912052   22143 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 23:11:54.913505   22143 start.go:297] selected driver: kvm2
	I1204 23:11:54.913574   22143 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:11:54.913616   22143 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:11:54.914682   22143 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:54.914785   22143 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:11:54.930843   22143 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:11:54.930898   22143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:11:54.931165   22143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:11:54.931195   22143 cni.go:84] Creating CNI manager for ""
	I1204 23:11:54.931254   22143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 23:11:54.931268   22143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 23:11:54.931325   22143 start.go:340] cluster config:
	{Name:addons-633765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-633765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:54.931434   22143 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:54.933394   22143 out.go:177] * Starting "addons-633765" primary control-plane node in "addons-633765" cluster
	I1204 23:11:54.934732   22143 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:54.934770   22143 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:11:54.934783   22143 cache.go:56] Caching tarball of preloaded images
	I1204 23:11:54.934889   22143 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:11:54.934906   22143 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:11:54.935331   22143 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/config.json ...
	I1204 23:11:54.935357   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/config.json: {Name:mkc9d414dd7baebab2e846abbba1797c3082bc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:54.935498   22143 start.go:360] acquireMachinesLock for addons-633765: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:11:54.935546   22143 start.go:364] duration metric: took 32.331µs to acquireMachinesLock for "addons-633765"
	I1204 23:11:54.935563   22143 start.go:93] Provisioning new machine with config: &{Name:addons-633765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-633765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:54.935620   22143 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 23:11:54.939019   22143 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1204 23:11:54.939173   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:11:54.939221   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:11:54.954089   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I1204 23:11:54.954680   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:11:54.955359   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:11:54.955384   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:11:54.955879   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:11:54.956102   22143 main.go:141] libmachine: (addons-633765) Calling .GetMachineName
	I1204 23:11:54.956317   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:11:54.956484   22143 start.go:159] libmachine.API.Create for "addons-633765" (driver="kvm2")
	I1204 23:11:54.956514   22143 client.go:168] LocalClient.Create starting
	I1204 23:11:54.956555   22143 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:11:55.163698   22143 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:11:55.235985   22143 main.go:141] libmachine: Running pre-create checks...
	I1204 23:11:55.236006   22143 main.go:141] libmachine: (addons-633765) Calling .PreCreateCheck
	I1204 23:11:55.236467   22143 main.go:141] libmachine: (addons-633765) Calling .GetConfigRaw
	I1204 23:11:55.236890   22143 main.go:141] libmachine: Creating machine...
	I1204 23:11:55.236904   22143 main.go:141] libmachine: (addons-633765) Calling .Create
	I1204 23:11:55.237079   22143 main.go:141] libmachine: (addons-633765) Creating KVM machine...
	I1204 23:11:55.238330   22143 main.go:141] libmachine: (addons-633765) DBG | found existing default KVM network
	I1204 23:11:55.239228   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:55.239060   22189 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011b1f0}
	I1204 23:11:55.239255   22143 main.go:141] libmachine: (addons-633765) DBG | created network xml: 
	I1204 23:11:55.239268   22143 main.go:141] libmachine: (addons-633765) DBG | <network>
	I1204 23:11:55.239277   22143 main.go:141] libmachine: (addons-633765) DBG |   <name>mk-addons-633765</name>
	I1204 23:11:55.239286   22143 main.go:141] libmachine: (addons-633765) DBG |   <dns enable='no'/>
	I1204 23:11:55.239296   22143 main.go:141] libmachine: (addons-633765) DBG |   
	I1204 23:11:55.239309   22143 main.go:141] libmachine: (addons-633765) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 23:11:55.239319   22143 main.go:141] libmachine: (addons-633765) DBG |     <dhcp>
	I1204 23:11:55.239328   22143 main.go:141] libmachine: (addons-633765) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 23:11:55.239344   22143 main.go:141] libmachine: (addons-633765) DBG |     </dhcp>
	I1204 23:11:55.239354   22143 main.go:141] libmachine: (addons-633765) DBG |   </ip>
	I1204 23:11:55.239360   22143 main.go:141] libmachine: (addons-633765) DBG |   
	I1204 23:11:55.239370   22143 main.go:141] libmachine: (addons-633765) DBG | </network>
	I1204 23:11:55.239380   22143 main.go:141] libmachine: (addons-633765) DBG | 
	I1204 23:11:55.245577   22143 main.go:141] libmachine: (addons-633765) DBG | trying to create private KVM network mk-addons-633765 192.168.39.0/24...
	I1204 23:11:55.309521   22143 main.go:141] libmachine: (addons-633765) DBG | private KVM network mk-addons-633765 192.168.39.0/24 created
	I1204 23:11:55.309577   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:55.309447   22189 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:11:55.309589   22143 main.go:141] libmachine: (addons-633765) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765 ...
	I1204 23:11:55.309610   22143 main.go:141] libmachine: (addons-633765) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:11:55.309633   22143 main.go:141] libmachine: (addons-633765) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:11:55.581196   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:55.580978   22189 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa...
	I1204 23:11:55.876312   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:55.876108   22189 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/addons-633765.rawdisk...
	I1204 23:11:55.876358   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765 (perms=drwx------)
	I1204 23:11:55.876373   22143 main.go:141] libmachine: (addons-633765) DBG | Writing magic tar header
	I1204 23:11:55.876387   22143 main.go:141] libmachine: (addons-633765) DBG | Writing SSH key tar header
	I1204 23:11:55.876395   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:55.876219   22189 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765 ...
	I1204 23:11:55.876413   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765
	I1204 23:11:55.876432   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:11:55.876448   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:11:55.876458   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:11:55.876464   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:11:55.876471   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:11:55.876476   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:11:55.876482   22143 main.go:141] libmachine: (addons-633765) DBG | Checking permissions on dir: /home
	I1204 23:11:55.876486   22143 main.go:141] libmachine: (addons-633765) DBG | Skipping /home - not owner
	I1204 23:11:55.876496   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:11:55.876509   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:11:55.876521   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:11:55.876534   22143 main.go:141] libmachine: (addons-633765) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:11:55.876568   22143 main.go:141] libmachine: (addons-633765) Creating domain...
	I1204 23:11:55.877423   22143 main.go:141] libmachine: (addons-633765) define libvirt domain using xml: 
	I1204 23:11:55.877444   22143 main.go:141] libmachine: (addons-633765) <domain type='kvm'>
	I1204 23:11:55.877454   22143 main.go:141] libmachine: (addons-633765)   <name>addons-633765</name>
	I1204 23:11:55.877461   22143 main.go:141] libmachine: (addons-633765)   <memory unit='MiB'>4000</memory>
	I1204 23:11:55.877469   22143 main.go:141] libmachine: (addons-633765)   <vcpu>2</vcpu>
	I1204 23:11:55.877483   22143 main.go:141] libmachine: (addons-633765)   <features>
	I1204 23:11:55.877495   22143 main.go:141] libmachine: (addons-633765)     <acpi/>
	I1204 23:11:55.877507   22143 main.go:141] libmachine: (addons-633765)     <apic/>
	I1204 23:11:55.877540   22143 main.go:141] libmachine: (addons-633765)     <pae/>
	I1204 23:11:55.877564   22143 main.go:141] libmachine: (addons-633765)     
	I1204 23:11:55.877573   22143 main.go:141] libmachine: (addons-633765)   </features>
	I1204 23:11:55.877578   22143 main.go:141] libmachine: (addons-633765)   <cpu mode='host-passthrough'>
	I1204 23:11:55.877585   22143 main.go:141] libmachine: (addons-633765)   
	I1204 23:11:55.877591   22143 main.go:141] libmachine: (addons-633765)   </cpu>
	I1204 23:11:55.877609   22143 main.go:141] libmachine: (addons-633765)   <os>
	I1204 23:11:55.877617   22143 main.go:141] libmachine: (addons-633765)     <type>hvm</type>
	I1204 23:11:55.877622   22143 main.go:141] libmachine: (addons-633765)     <boot dev='cdrom'/>
	I1204 23:11:55.877630   22143 main.go:141] libmachine: (addons-633765)     <boot dev='hd'/>
	I1204 23:11:55.877636   22143 main.go:141] libmachine: (addons-633765)     <bootmenu enable='no'/>
	I1204 23:11:55.877645   22143 main.go:141] libmachine: (addons-633765)   </os>
	I1204 23:11:55.877650   22143 main.go:141] libmachine: (addons-633765)   <devices>
	I1204 23:11:55.877657   22143 main.go:141] libmachine: (addons-633765)     <disk type='file' device='cdrom'>
	I1204 23:11:55.877665   22143 main.go:141] libmachine: (addons-633765)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/boot2docker.iso'/>
	I1204 23:11:55.877673   22143 main.go:141] libmachine: (addons-633765)       <target dev='hdc' bus='scsi'/>
	I1204 23:11:55.877678   22143 main.go:141] libmachine: (addons-633765)       <readonly/>
	I1204 23:11:55.877682   22143 main.go:141] libmachine: (addons-633765)     </disk>
	I1204 23:11:55.877688   22143 main.go:141] libmachine: (addons-633765)     <disk type='file' device='disk'>
	I1204 23:11:55.877696   22143 main.go:141] libmachine: (addons-633765)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:11:55.877704   22143 main.go:141] libmachine: (addons-633765)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/addons-633765.rawdisk'/>
	I1204 23:11:55.877711   22143 main.go:141] libmachine: (addons-633765)       <target dev='hda' bus='virtio'/>
	I1204 23:11:55.877716   22143 main.go:141] libmachine: (addons-633765)     </disk>
	I1204 23:11:55.877727   22143 main.go:141] libmachine: (addons-633765)     <interface type='network'>
	I1204 23:11:55.877735   22143 main.go:141] libmachine: (addons-633765)       <source network='mk-addons-633765'/>
	I1204 23:11:55.877740   22143 main.go:141] libmachine: (addons-633765)       <model type='virtio'/>
	I1204 23:11:55.877747   22143 main.go:141] libmachine: (addons-633765)     </interface>
	I1204 23:11:55.877751   22143 main.go:141] libmachine: (addons-633765)     <interface type='network'>
	I1204 23:11:55.877757   22143 main.go:141] libmachine: (addons-633765)       <source network='default'/>
	I1204 23:11:55.877764   22143 main.go:141] libmachine: (addons-633765)       <model type='virtio'/>
	I1204 23:11:55.877768   22143 main.go:141] libmachine: (addons-633765)     </interface>
	I1204 23:11:55.877772   22143 main.go:141] libmachine: (addons-633765)     <serial type='pty'>
	I1204 23:11:55.877777   22143 main.go:141] libmachine: (addons-633765)       <target port='0'/>
	I1204 23:11:55.877793   22143 main.go:141] libmachine: (addons-633765)     </serial>
	I1204 23:11:55.877801   22143 main.go:141] libmachine: (addons-633765)     <console type='pty'>
	I1204 23:11:55.877806   22143 main.go:141] libmachine: (addons-633765)       <target type='serial' port='0'/>
	I1204 23:11:55.877811   22143 main.go:141] libmachine: (addons-633765)     </console>
	I1204 23:11:55.877815   22143 main.go:141] libmachine: (addons-633765)     <rng model='virtio'>
	I1204 23:11:55.877821   22143 main.go:141] libmachine: (addons-633765)       <backend model='random'>/dev/random</backend>
	I1204 23:11:55.877831   22143 main.go:141] libmachine: (addons-633765)     </rng>
	I1204 23:11:55.877840   22143 main.go:141] libmachine: (addons-633765)     
	I1204 23:11:55.877848   22143 main.go:141] libmachine: (addons-633765)     
	I1204 23:11:55.877861   22143 main.go:141] libmachine: (addons-633765)   </devices>
	I1204 23:11:55.877873   22143 main.go:141] libmachine: (addons-633765) </domain>
	I1204 23:11:55.877878   22143 main.go:141] libmachine: (addons-633765) 
	I1204 23:11:55.884563   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:d5:bc:c8 in network default
	I1204 23:11:55.885053   22143 main.go:141] libmachine: (addons-633765) Ensuring networks are active...
	I1204 23:11:55.885079   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:55.885684   22143 main.go:141] libmachine: (addons-633765) Ensuring network default is active
	I1204 23:11:55.886040   22143 main.go:141] libmachine: (addons-633765) Ensuring network mk-addons-633765 is active
	I1204 23:11:55.886558   22143 main.go:141] libmachine: (addons-633765) Getting domain xml...
	I1204 23:11:55.887222   22143 main.go:141] libmachine: (addons-633765) Creating domain...
	I1204 23:11:57.309668   22143 main.go:141] libmachine: (addons-633765) Waiting to get IP...
	I1204 23:11:57.310604   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:57.311086   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:57.311115   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:57.311043   22189 retry.go:31] will retry after 190.690419ms: waiting for machine to come up
	I1204 23:11:57.503732   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:57.504164   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:57.504192   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:57.504113   22189 retry.go:31] will retry after 335.766313ms: waiting for machine to come up
	I1204 23:11:57.841924   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:57.842698   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:57.842728   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:57.842644   22189 retry.go:31] will retry after 325.812914ms: waiting for machine to come up
	I1204 23:11:58.169726   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:58.170203   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:58.170231   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:58.170157   22189 retry.go:31] will retry after 452.330793ms: waiting for machine to come up
	I1204 23:11:58.623794   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:58.624262   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:58.624291   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:58.624215   22189 retry.go:31] will retry after 505.527712ms: waiting for machine to come up
	I1204 23:11:59.130910   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:59.131336   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:59.131365   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:59.131285   22189 retry.go:31] will retry after 841.290875ms: waiting for machine to come up
	I1204 23:11:59.973916   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:11:59.974358   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:11:59.974380   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:11:59.974305   22189 retry.go:31] will retry after 991.341734ms: waiting for machine to come up
	I1204 23:12:00.967345   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:00.967694   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:00.967741   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:00.967642   22189 retry.go:31] will retry after 1.336379705s: waiting for machine to come up
	I1204 23:12:02.306275   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:02.306840   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:02.306865   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:02.306798   22189 retry.go:31] will retry after 1.766631454s: waiting for machine to come up
	I1204 23:12:04.075752   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:04.076081   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:04.076107   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:04.076047   22189 retry.go:31] will retry after 1.453032769s: waiting for machine to come up
	I1204 23:12:05.530334   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:05.530813   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:05.530840   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:05.530760   22189 retry.go:31] will retry after 2.3407218s: waiting for machine to come up
	I1204 23:12:07.874189   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:07.874674   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:07.874704   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:07.874620   22189 retry.go:31] will retry after 3.363741211s: waiting for machine to come up
	I1204 23:12:11.240510   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:11.240853   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find current IP address of domain addons-633765 in network mk-addons-633765
	I1204 23:12:11.240877   22143 main.go:141] libmachine: (addons-633765) DBG | I1204 23:12:11.240819   22189 retry.go:31] will retry after 4.268292782s: waiting for machine to come up
	I1204 23:12:15.511204   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.511598   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has current primary IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.511621   22143 main.go:141] libmachine: (addons-633765) Found IP for machine: 192.168.39.220
	I1204 23:12:15.511634   22143 main.go:141] libmachine: (addons-633765) Reserving static IP address...
	I1204 23:12:15.512104   22143 main.go:141] libmachine: (addons-633765) DBG | unable to find host DHCP lease matching {name: "addons-633765", mac: "52:54:00:5d:d1:10", ip: "192.168.39.220"} in network mk-addons-633765
	I1204 23:12:15.584707   22143 main.go:141] libmachine: (addons-633765) Reserved static IP address: 192.168.39.220
	I1204 23:12:15.584779   22143 main.go:141] libmachine: (addons-633765) DBG | Getting to WaitForSSH function...
	I1204 23:12:15.584812   22143 main.go:141] libmachine: (addons-633765) Waiting for SSH to be available...
	I1204 23:12:15.587454   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.587829   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:15.587872   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.588030   22143 main.go:141] libmachine: (addons-633765) DBG | Using SSH client type: external
	I1204 23:12:15.588059   22143 main.go:141] libmachine: (addons-633765) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa (-rw-------)
	I1204 23:12:15.588107   22143 main.go:141] libmachine: (addons-633765) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:12:15.588131   22143 main.go:141] libmachine: (addons-633765) DBG | About to run SSH command:
	I1204 23:12:15.588145   22143 main.go:141] libmachine: (addons-633765) DBG | exit 0
	I1204 23:12:15.717764   22143 main.go:141] libmachine: (addons-633765) DBG | SSH cmd err, output: <nil>: 
	I1204 23:12:15.718043   22143 main.go:141] libmachine: (addons-633765) KVM machine creation complete!
	I1204 23:12:15.718392   22143 main.go:141] libmachine: (addons-633765) Calling .GetConfigRaw
	I1204 23:12:15.718922   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:15.719095   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:15.719223   22143 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:12:15.719237   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:15.720584   22143 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:12:15.720600   22143 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:12:15.720607   22143 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:12:15.720613   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:15.722605   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.722939   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:15.722968   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.723038   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:15.723214   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.723407   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.723547   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:15.723709   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:15.723927   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:15.723944   22143 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:12:15.821247   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:12:15.821274   22143 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:12:15.821282   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:15.824165   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.824481   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:15.824505   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.824626   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:15.824800   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.824944   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.825099   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:15.825260   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:15.825417   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:15.825428   22143 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:12:15.926222   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:12:15.926298   22143 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:12:15.926307   22143 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:12:15.926314   22143 main.go:141] libmachine: (addons-633765) Calling .GetMachineName
	I1204 23:12:15.926552   22143 buildroot.go:166] provisioning hostname "addons-633765"
	I1204 23:12:15.926576   22143 main.go:141] libmachine: (addons-633765) Calling .GetMachineName
	I1204 23:12:15.926725   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:15.929066   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.929451   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:15.929477   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:15.929605   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:15.929848   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.930016   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:15.930121   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:15.930246   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:15.930409   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:15.930421   22143 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-633765 && echo "addons-633765" | sudo tee /etc/hostname
	I1204 23:12:16.043597   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-633765
	
	I1204 23:12:16.043633   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.046308   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.046647   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.046690   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.046829   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.047028   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.047133   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.047220   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.047316   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:16.047482   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:16.047496   22143 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-633765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-633765/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-633765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:12:16.154454   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:12:16.154484   22143 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:12:16.154501   22143 buildroot.go:174] setting up certificates
	I1204 23:12:16.154511   22143 provision.go:84] configureAuth start
	I1204 23:12:16.154519   22143 main.go:141] libmachine: (addons-633765) Calling .GetMachineName
	I1204 23:12:16.154828   22143 main.go:141] libmachine: (addons-633765) Calling .GetIP
	I1204 23:12:16.157585   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.157888   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.157917   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.158072   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.160403   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.160717   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.160765   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.160860   22143 provision.go:143] copyHostCerts
	I1204 23:12:16.160940   22143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:12:16.161063   22143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:12:16.161136   22143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:12:16.161201   22143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.addons-633765 san=[127.0.0.1 192.168.39.220 addons-633765 localhost minikube]
	I1204 23:12:16.391330   22143 provision.go:177] copyRemoteCerts
	I1204 23:12:16.391390   22143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:12:16.391411   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.393707   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.394080   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.394109   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.394383   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.394575   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.394792   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.394956   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:16.471534   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:12:16.493981   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:12:16.515584   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:12:16.537310   22143 provision.go:87] duration metric: took 382.786303ms to configureAuth
	I1204 23:12:16.537339   22143 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:12:16.537548   22143 config.go:182] Loaded profile config "addons-633765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:12:16.537635   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.540380   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.540793   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.540823   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.540956   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.541173   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.541366   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.541511   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.541706   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:16.541864   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:16.541879   22143 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:12:16.751676   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:12:16.751709   22143 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:12:16.751719   22143 main.go:141] libmachine: (addons-633765) Calling .GetURL
	I1204 23:12:16.752990   22143 main.go:141] libmachine: (addons-633765) DBG | Using libvirt version 6000000
	I1204 23:12:16.754847   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.755135   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.755166   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.755275   22143 main.go:141] libmachine: Docker is up and running!
	I1204 23:12:16.755292   22143 main.go:141] libmachine: Reticulating splines...
	I1204 23:12:16.755300   22143 client.go:171] duration metric: took 21.798774525s to LocalClient.Create
	I1204 23:12:16.755326   22143 start.go:167] duration metric: took 21.798855237s to libmachine.API.Create "addons-633765"
	I1204 23:12:16.755340   22143 start.go:293] postStartSetup for "addons-633765" (driver="kvm2")
	I1204 23:12:16.755353   22143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:12:16.755373   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:16.755626   22143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:12:16.755649   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.757560   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.757831   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.757870   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.757999   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.758174   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.758303   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.758447   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:16.835753   22143 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:12:16.839766   22143 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:12:16.839791   22143 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:12:16.839857   22143 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:12:16.839880   22143 start.go:296] duration metric: took 84.533489ms for postStartSetup
	I1204 23:12:16.839911   22143 main.go:141] libmachine: (addons-633765) Calling .GetConfigRaw
	I1204 23:12:16.840524   22143 main.go:141] libmachine: (addons-633765) Calling .GetIP
	I1204 23:12:16.842889   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.843311   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.843338   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.843569   22143 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/config.json ...
	I1204 23:12:16.843740   22143 start.go:128] duration metric: took 21.908111138s to createHost
	I1204 23:12:16.843761   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.845679   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.845928   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.845969   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.846078   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.846227   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.846355   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.846494   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.846663   22143 main.go:141] libmachine: Using SSH client type: native
	I1204 23:12:16.846921   22143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1204 23:12:16.846937   22143 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:12:16.946352   22143 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733353936.917274850
	
	I1204 23:12:16.946377   22143 fix.go:216] guest clock: 1733353936.917274850
	I1204 23:12:16.946387   22143 fix.go:229] Guest: 2024-12-04 23:12:16.91727485 +0000 UTC Remote: 2024-12-04 23:12:16.843752003 +0000 UTC m=+22.023993472 (delta=73.522847ms)
	I1204 23:12:16.946425   22143 fix.go:200] guest clock delta is within tolerance: 73.522847ms
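The guest-clock check above runs date +%s.%N over SSH and compares the result with the host's clock at roughly the same instant, accepting small drift. Reproduced by hand it looks roughly like this (a sketch; it reuses the SSH key path and guest user shown earlier in this log):

    # Compare guest and host clocks; a small delta in either direction is expected
    GUEST=$(ssh -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa docker@192.168.39.220 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN{printf "delta: %+.6fs\n", h - g}'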
	I1204 23:12:16.946430   22143 start.go:83] releasing machines lock for "addons-633765", held for 22.010875212s
	I1204 23:12:16.946450   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:16.946772   22143 main.go:141] libmachine: (addons-633765) Calling .GetIP
	I1204 23:12:16.949095   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.949402   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.949425   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.949568   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:16.950030   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:16.950180   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:16.950250   22143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:12:16.950298   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.950401   22143 ssh_runner.go:195] Run: cat /version.json
	I1204 23:12:16.950425   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:16.952818   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.953064   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.953100   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.953119   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.953280   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.953442   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.953528   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:16.953553   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:16.953607   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.953716   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:16.953784   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:16.953849   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:16.953974   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:16.954111   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:17.026604   22143 ssh_runner.go:195] Run: systemctl --version
	I1204 23:12:17.067457   22143 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:12:17.224234   22143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:12:17.229672   22143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:12:17.229741   22143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:12:17.244220   22143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:12:17.244247   22143 start.go:495] detecting cgroup driver to use...
	I1204 23:12:17.244318   22143 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:12:17.259626   22143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:12:17.272462   22143 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:12:17.272511   22143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:12:17.285345   22143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:12:17.297963   22143 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:12:17.413314   22143 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:12:17.548195   22143 docker.go:233] disabling docker service ...
	I1204 23:12:17.548268   22143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:12:17.568601   22143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:12:17.581323   22143 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:12:17.713262   22143 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:12:17.828812   22143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:12:17.841649   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:12:17.858549   22143 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:12:17.858614   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.868270   22143 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:12:17.868349   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.877643   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.886997   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.896227   22143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:12:17.905460   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.914490   22143 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:12:17.929449   22143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
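The sed edits above tune minikube's CRI-O drop-in so that the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl all line up with the kubeadm settings used later. After they run, the relevant parts of /etc/crio/crio.conf.d/02-crio.conf look roughly as follows (an approximation; the shipped file carries more settings and the section headers are assumed here):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]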
	I1204 23:12:17.938486   22143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:12:17.946850   22143 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:12:17.946893   22143 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:12:17.958968   22143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
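The br_netfilter probe and the ip_forward write above only apply to the current boot. For anyone reproducing this setup outside minikube, the persistent equivalent is sketched below (minikube itself does not write these files):

    # Load br_netfilter at boot and persist the sysctls the bridge CNI and kube-proxy rely on
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system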
	I1204 23:12:17.967399   22143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:12:18.081702   22143 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:12:18.170653   22143 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:12:18.170746   22143 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:12:18.175034   22143 start.go:563] Will wait 60s for crictl version
	I1204 23:12:18.175093   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:12:18.178449   22143 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:12:18.216869   22143 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:12:18.216990   22143 ssh_runner.go:195] Run: crio --version
	I1204 23:12:18.243428   22143 ssh_runner.go:195] Run: crio --version
	I1204 23:12:18.271501   22143 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:12:18.272681   22143 main.go:141] libmachine: (addons-633765) Calling .GetIP
	I1204 23:12:18.275235   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:18.275549   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:18.275579   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:18.275747   22143 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:12:18.279449   22143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:12:18.290684   22143 kubeadm.go:883] updating cluster {Name:addons-633765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-633765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:12:18.290786   22143 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:12:18.290832   22143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:12:18.320990   22143 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 23:12:18.321047   22143 ssh_runner.go:195] Run: which lz4
	I1204 23:12:18.324608   22143 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 23:12:18.328419   22143 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 23:12:18.328446   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 23:12:19.512759   22143 crio.go:462] duration metric: took 1.188171578s to copy over tarball
	I1204 23:12:19.512845   22143 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 23:12:21.598554   22143 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08567371s)
	I1204 23:12:21.598586   22143 crio.go:469] duration metric: took 2.085793519s to extract the tarball
	I1204 23:12:21.598594   22143 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 23:12:21.633970   22143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:12:21.672016   22143 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:12:21.672042   22143 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:12:21.672051   22143 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1204 23:12:21.672165   22143 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-633765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-633765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:12:21.672246   22143 ssh_runner.go:195] Run: crio config
	I1204 23:12:21.719603   22143 cni.go:84] Creating CNI manager for ""
	I1204 23:12:21.719629   22143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 23:12:21.719641   22143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:12:21.719668   22143 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-633765 NodeName:addons-633765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:12:21.719786   22143 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-633765"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
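Once this rendered configuration has been copied to the node as /var/tmp/minikube/kubeadm.yaml (it is shipped as kubeadm.yaml.new a few lines below and copied into place just before init), it can be exercised without side effects using kubeadm's dry-run mode, sketched here; the test itself goes straight to the real init further down.

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run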
	I1204 23:12:21.719840   22143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:12:21.729037   22143 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:12:21.729114   22143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:12:21.737848   22143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 23:12:21.752705   22143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:12:21.767693   22143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1204 23:12:21.782543   22143 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1204 23:12:21.785922   22143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:12:21.796641   22143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:12:21.914312   22143 ssh_runner.go:195] Run: sudo systemctl start kubelet
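At this point the kubelet is started from the drop-in written above, but it will typically keep restarting until kubeadm init produces the kubelet configuration and kubeconfig that its ExecStart line references. Useful checks while that is happening (a sketch):

    systemctl cat kubelet                       # unit file plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20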
	I1204 23:12:21.929345   22143 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765 for IP: 192.168.39.220
	I1204 23:12:21.929372   22143 certs.go:194] generating shared ca certs ...
	I1204 23:12:21.929392   22143 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:21.929535   22143 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:12:22.180293   22143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt ...
	I1204 23:12:22.180326   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt: {Name:mk4d074d262dfcc2acdf41c75073a55bd1bb4cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.180522   22143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key ...
	I1204 23:12:22.180537   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key: {Name:mk6c15bde0f883df641f1e0d956e784c9f313960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.180633   22143 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:12:22.309391   22143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt ...
	I1204 23:12:22.309419   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt: {Name:mk2744b2fbfcb733f2a1c46704d08628fa2f16d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.309588   22143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key ...
	I1204 23:12:22.309602   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key: {Name:mk70ab7c64f0acfb4c83165478fea1f78ab3a526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.309691   22143 certs.go:256] generating profile certs ...
	I1204 23:12:22.309774   22143 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.key
	I1204 23:12:22.309802   22143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.crt with IP's: []
	I1204 23:12:22.516427   22143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.crt ...
	I1204 23:12:22.516461   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.crt: {Name:mk8b3754937aefed090734a6b04029c556ddcba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.516641   22143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.key ...
	I1204 23:12:22.516656   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/client.key: {Name:mk35641655badf98c514b77870b0c4c20b5b99bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.516750   22143 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key.ed012301
	I1204 23:12:22.516772   22143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt.ed012301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1204 23:12:22.703641   22143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt.ed012301 ...
	I1204 23:12:22.703671   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt.ed012301: {Name:mkf5a96b5142de425db3d40dc8a5ddcf4930ee0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.703831   22143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key.ed012301 ...
	I1204 23:12:22.703843   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key.ed012301: {Name:mk926c3ff049249d28f9aeb8273fc0351b6e72d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.703914   22143 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt.ed012301 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt
	I1204 23:12:22.703998   22143 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key.ed012301 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key
	I1204 23:12:22.704047   22143 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.key
	I1204 23:12:22.704063   22143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.crt with IP's: []
	I1204 23:12:22.756307   22143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.crt ...
	I1204 23:12:22.756333   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.crt: {Name:mk75d9770cc316ef4bc15713e80b1c249943b983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:22.756474   22143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.key ...
	I1204 23:12:22.756484   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.key: {Name:mkea0e4d59188c6492f0ea5ba05dc65a3b94f417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
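minikube generates the CA and leaf certificates above in its own Go code (crypto.go), but the same material can be produced with openssl for comparison, including the apiserver IP SANs listed a few lines earlier. This is a sketch only; the key sizes and subject names are assumptions.

    # CA roughly equivalent to minikubeCA
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=minikubeCA" -out ca.crt
    # API server certificate carrying the same IP SANs
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.220')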
	I1204 23:12:22.756632   22143 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:12:22.756666   22143 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:12:22.756693   22143 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:12:22.756716   22143 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:12:22.757273   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:12:22.782656   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:12:22.803547   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:12:22.838913   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:12:22.859908   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 23:12:22.881228   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:12:22.902385   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:12:22.923434   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/addons-633765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:12:22.944322   22143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:12:22.965210   22143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:12:22.979952   22143 ssh_runner.go:195] Run: openssl version
	I1204 23:12:22.985102   22143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:12:22.994543   22143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:12:22.998590   22143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:12:22.998634   22143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:12:23.003719   22143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
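The two commands above are what produce the b5213941.0 name: openssl's -hash option prints the subject hash of the CA, and a symlink named <hash>.0 is what OpenSSL-based clients look up in /etc/ssl/certs. Done by hand the same thing is simply:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 for this CA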
	I1204 23:12:23.013250   22143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:12:23.016895   22143 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:12:23.016947   22143 kubeadm.go:392] StartCluster: {Name:addons-633765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-633765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:12:23.017009   22143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:12:23.017045   22143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:12:23.048771   22143 cri.go:89] found id: ""
	I1204 23:12:23.048834   22143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:12:23.057744   22143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:12:23.066299   22143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:12:23.074685   22143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:12:23.074709   22143 kubeadm.go:157] found existing configuration files:
	
	I1204 23:12:23.074747   22143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:12:23.082704   22143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:12:23.082755   22143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:12:23.090939   22143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:12:23.099625   22143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:12:23.099681   22143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:12:23.108049   22143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:12:23.115991   22143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:12:23.116062   22143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:12:23.124117   22143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:12:23.131880   22143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:12:23.131929   22143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:12:23.139942   22143 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 23:12:23.292405   22143 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:12:32.493727   22143 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:12:32.493819   22143 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:12:32.493924   22143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:12:32.494084   22143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:12:32.494224   22143 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:12:32.494316   22143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:12:32.495838   22143 out.go:235]   - Generating certificates and keys ...
	I1204 23:12:32.495914   22143 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:12:32.495970   22143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:12:32.496027   22143 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:12:32.496073   22143 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:12:32.496131   22143 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:12:32.496172   22143 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:12:32.496236   22143 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:12:32.496349   22143 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-633765 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1204 23:12:32.496394   22143 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:12:32.496493   22143 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-633765 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1204 23:12:32.496584   22143 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:12:32.496690   22143 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:12:32.496759   22143 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:12:32.496835   22143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:12:32.496909   22143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:12:32.496986   22143 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:12:32.497057   22143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:12:32.497152   22143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:12:32.497226   22143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:12:32.497337   22143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:12:32.497416   22143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:12:32.498986   22143 out.go:235]   - Booting up control plane ...
	I1204 23:12:32.499069   22143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:12:32.499129   22143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:12:32.499186   22143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:12:32.499335   22143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:12:32.499421   22143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:12:32.499467   22143 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:12:32.499575   22143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:12:32.499665   22143 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:12:32.499715   22143 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.833962ms
	I1204 23:12:32.499774   22143 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:12:32.499829   22143 kubeadm.go:310] [api-check] The API server is healthy after 5.001986404s
	I1204 23:12:32.499920   22143 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:12:32.500045   22143 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:12:32.500102   22143 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:12:32.500259   22143 kubeadm.go:310] [mark-control-plane] Marking the node addons-633765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:12:32.500308   22143 kubeadm.go:310] [bootstrap-token] Using token: da6v5d.mw3ovs1qkvhy0fd5
	I1204 23:12:32.501794   22143 out.go:235]   - Configuring RBAC rules ...
	I1204 23:12:32.501908   22143 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:12:32.502066   22143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:12:32.502213   22143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:12:32.502326   22143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:12:32.502433   22143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:12:32.502520   22143 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:12:32.502621   22143 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:12:32.502660   22143 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:12:32.502702   22143 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:12:32.502709   22143 kubeadm.go:310] 
	I1204 23:12:32.502754   22143 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:12:32.502766   22143 kubeadm.go:310] 
	I1204 23:12:32.502833   22143 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:12:32.502842   22143 kubeadm.go:310] 
	I1204 23:12:32.502865   22143 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:12:32.502912   22143 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:12:32.502952   22143 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:12:32.502958   22143 kubeadm.go:310] 
	I1204 23:12:32.503004   22143 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:12:32.503010   22143 kubeadm.go:310] 
	I1204 23:12:32.503052   22143 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:12:32.503060   22143 kubeadm.go:310] 
	I1204 23:12:32.503103   22143 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:12:32.503198   22143 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:12:32.503262   22143 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:12:32.503269   22143 kubeadm.go:310] 
	I1204 23:12:32.503334   22143 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:12:32.503394   22143 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:12:32.503400   22143 kubeadm.go:310] 
	I1204 23:12:32.503474   22143 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token da6v5d.mw3ovs1qkvhy0fd5 \
	I1204 23:12:32.503565   22143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c \
	I1204 23:12:32.503589   22143 kubeadm.go:310] 	--control-plane 
	I1204 23:12:32.503595   22143 kubeadm.go:310] 
	I1204 23:12:32.503663   22143 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:12:32.503670   22143 kubeadm.go:310] 
	I1204 23:12:32.503737   22143 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token da6v5d.mw3ovs1qkvhy0fd5 \
	I1204 23:12:32.503826   22143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c 
	I1204 23:12:32.503835   22143 cni.go:84] Creating CNI manager for ""
	I1204 23:12:32.503841   22143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 23:12:32.505293   22143 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 23:12:32.506474   22143 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 23:12:32.516944   22143 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
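The 496-byte conflist copied above is what gives the node its bridge network for the 10.244.0.0/16 pod CIDR chosen earlier. A minimal file of the same shape is sketched below; the exact contents minikube installs are not shown in the log, so treat this as an approximation.

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }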
	I1204 23:12:32.534833   22143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:12:32.534932   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:32.534956   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-633765 minikube.k8s.io/updated_at=2024_12_04T23_12_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=addons-633765 minikube.k8s.io/primary=true
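The two kubectl invocations above bind cluster-admin to the kube-system default service account (so in-cluster components that use it have full access) and stamp the node with minikube's version and commit labels. Both can be verified afterwards with (a sketch):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node addons-633765 --show-labels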
	I1204 23:12:32.678869   22143 ops.go:34] apiserver oom_adj: -16
	I1204 23:12:32.678876   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:33.179319   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:33.679050   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:34.179561   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:34.679369   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:35.179553   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:35.679871   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:36.179303   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:36.679955   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:37.179016   22143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:12:37.340484   22143 kubeadm.go:1113] duration metric: took 4.805606583s to wait for elevateKubeSystemPrivileges
	I1204 23:12:37.340521   22143 kubeadm.go:394] duration metric: took 14.323578354s to StartCluster
	I1204 23:12:37.340542   22143 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:37.340667   22143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:12:37.341006   22143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:12:37.341211   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:12:37.341234   22143 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:12:37.341289   22143 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 23:12:37.341401   22143 addons.go:69] Setting yakd=true in profile "addons-633765"
	I1204 23:12:37.341426   22143 addons.go:234] Setting addon yakd=true in "addons-633765"
	I1204 23:12:37.341432   22143 addons.go:69] Setting cloud-spanner=true in profile "addons-633765"
	I1204 23:12:37.341449   22143 addons.go:234] Setting addon cloud-spanner=true in "addons-633765"
	I1204 23:12:37.341448   22143 addons.go:69] Setting storage-provisioner=true in profile "addons-633765"
	I1204 23:12:37.341437   22143 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-633765"
	I1204 23:12:37.341465   22143 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-633765"
	I1204 23:12:37.341476   22143 addons.go:69] Setting volcano=true in profile "addons-633765"
	I1204 23:12:37.341479   22143 addons.go:234] Setting addon storage-provisioner=true in "addons-633765"
	I1204 23:12:37.341467   22143 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-633765"
	I1204 23:12:37.341483   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341489   22143 addons.go:234] Setting addon volcano=true in "addons-633765"
	I1204 23:12:37.341488   22143 config.go:182] Loaded profile config "addons-633765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:12:37.341504   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341506   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341507   22143 addons.go:69] Setting ingress-dns=true in profile "addons-633765"
	I1204 23:12:37.341514   22143 addons.go:69] Setting default-storageclass=true in profile "addons-633765"
	I1204 23:12:37.341521   22143 addons.go:234] Setting addon ingress-dns=true in "addons-633765"
	I1204 23:12:37.341526   22143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-633765"
	I1204 23:12:37.341491   22143 addons.go:69] Setting registry=true in profile "addons-633765"
	I1204 23:12:37.341549   22143 addons.go:234] Setting addon registry=true in "addons-633765"
	I1204 23:12:37.341562   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341573   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341479   22143 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-633765"
	I1204 23:12:37.341644   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.341931   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.341931   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.341938   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.341495   22143 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-633765"
	I1204 23:12:37.341972   22143 addons.go:69] Setting gcp-auth=true in profile "addons-633765"
	I1204 23:12:37.341480   22143 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-633765"
	I1204 23:12:37.341979   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.341988   22143 mustload.go:65] Loading cluster: addons-633765
	I1204 23:12:37.341995   22143 addons.go:69] Setting inspektor-gadget=true in profile "addons-633765"
	I1204 23:12:37.341999   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342006   22143 addons.go:234] Setting addon inspektor-gadget=true in "addons-633765"
	I1204 23:12:37.341981   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342028   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342039   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342059   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342138   22143 addons.go:69] Setting volumesnapshots=true in profile "addons-633765"
	I1204 23:12:37.342152   22143 addons.go:234] Setting addon volumesnapshots=true in "addons-633765"
	I1204 23:12:37.342159   22143 config.go:182] Loaded profile config "addons-633765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:12:37.342178   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342259   22143 addons.go:69] Setting metrics-server=true in profile "addons-633765"
	I1204 23:12:37.342275   22143 addons.go:234] Setting addon metrics-server=true in "addons-633765"
	I1204 23:12:37.342302   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342329   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342338   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.341404   22143 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-633765"
	I1204 23:12:37.342362   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342370   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342378   22143 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-633765"
	I1204 23:12:37.342392   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342397   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342420   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.341457   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.342462   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342471   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342491   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.341506   22143 addons.go:69] Setting ingress=true in profile "addons-633765"
	I1204 23:12:37.342517   22143 addons.go:234] Setting addon ingress=true in "addons-633765"
	I1204 23:12:37.342027   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.341960   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342558   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342573   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342596   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342662   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342688   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342725   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342745   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342751   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.342777   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.342905   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.343097   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.343437   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.343501   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.344361   22143 out.go:177] * Verifying Kubernetes components...
	I1204 23:12:37.345880   22143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:12:37.362271   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I1204 23:12:37.374465   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I1204 23:12:37.374709   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.374751   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.375322   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.375453   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.389894   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.389938   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.389895   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.389988   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.390550   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.390613   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.390727   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.391270   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.391324   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.393645   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I1204 23:12:37.394207   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.394722   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.394744   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.395097   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.395632   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.395669   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.396009   22143 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-633765"
	I1204 23:12:37.396054   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.396410   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.396453   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.408326   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1204 23:12:37.412437   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I1204 23:12:37.412980   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.413627   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.413647   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.414062   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.415895   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.415938   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.416230   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I1204 23:12:37.416598   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1204 23:12:37.417077   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.417612   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.417628   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.417988   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I1204 23:12:37.418326   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.418394   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.418709   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.418917   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.418938   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.419267   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.419368   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.419384   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.419468   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I1204 23:12:37.419748   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.419792   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.419821   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.419956   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41439
	I1204 23:12:37.420182   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.420677   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.420713   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.420977   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.420991   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.421129   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.421142   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.421194   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.421608   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.421625   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.421685   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.422019   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.422215   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.422259   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.422285   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.422563   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.422991   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.423053   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1204 23:12:37.423071   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.423100   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.423589   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.423958   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.424316   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.424343   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.424643   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.424658   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.424849   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.424876   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.431621   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.431893   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.434553   22143 addons.go:234] Setting addon default-storageclass=true in "addons-633765"
	I1204 23:12:37.434594   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:37.434951   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.434986   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.437917   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1204 23:12:37.440289   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1204 23:12:37.446734   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.447334   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.447355   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.447707   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.448296   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.448336   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.448546   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I1204 23:12:37.448885   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.449133   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33353
	I1204 23:12:37.449383   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I1204 23:12:37.449578   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.449591   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.449767   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.450039   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1204 23:12:37.450333   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.450345   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.451082   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.451408   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.451535   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.451547   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.451923   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.451937   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.451964   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.451978   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1204 23:12:37.452393   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.452443   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.452486   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.452779   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.452940   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.452957   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.452971   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.453109   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.453137   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.453655   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.453690   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.453879   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.454057   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.454570   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I1204 23:12:37.454683   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.454741   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.455315   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.455418   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.455433   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.455490   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.456298   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.457489   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.457526   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.458079   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.458133   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.459578   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.459596   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.459578   22143 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 23:12:37.460172   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.460816   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.460854   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.460935   22143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:12:37.460995   22143 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 23:12:37.461119   22143 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:12:37.461135   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 23:12:37.461151   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.463150   22143 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 23:12:37.463169   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 23:12:37.463187   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.463812   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I1204 23:12:37.463789   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43243
	I1204 23:12:37.465252   22143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:12:37.466672   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I1204 23:12:37.466788   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.468292   22143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 23:12:37.469471   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.470545   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.470853   22143 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:12:37.470870   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 23:12:37.470886   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.471806   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I1204 23:12:37.472314   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.472863   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.472879   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.473269   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.473471   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.475040   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.475639   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.476036   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.476650   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.476667   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.476732   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.476747   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.476945   22143 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 23:12:37.477094   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.477122   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.478497   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I1204 23:12:37.478533   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.478596   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.478598   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.478613   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.478629   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.478635   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.478612   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.478679   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.478837   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.478892   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.478934   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.478955   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.479243   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.479555   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.479687   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.479742   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1204 23:12:37.479878   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.479979   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.479993   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.480052   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.480243   22143 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:12:37.480257   22143 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:12:37.480295   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.480359   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I1204 23:12:37.480499   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.480882   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.481121   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.481170   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.481556   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.481626   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.481664   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.482220   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.482387   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.482405   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.482473   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.482982   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.483534   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.483688   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.483702   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.483754   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.485311   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.485506   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.486717   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.486736   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.487569   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.487612   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.490940   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.491367   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.492208   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.492589   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.492620   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.492634   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.492707   22143 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 23:12:37.492655   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.492995   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.493223   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.493361   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.494242   22143 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 23:12:37.494265   22143 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 23:12:37.494283   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.494293   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.494624   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 23:12:37.494744   22143 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 23:12:37.496442   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I1204 23:12:37.497089   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.497828   22143 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:12:37.497844   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 23:12:37.497860   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.498665   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.498852   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I1204 23:12:37.498870   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I1204 23:12:37.498983   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.499003   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.499018   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.499188   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.499199   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.499598   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I1204 23:12:37.499515   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.500026   22143 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 23:12:37.500112   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.500141   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.500202   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 23:12:37.500351   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.500530   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.500702   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.501104   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.501120   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.501183   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.501359   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.501374   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.501435   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.501644   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.501692   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.501818   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.501832   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.501912   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.501930   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.502019   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.502070   22143 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:12:37.502080   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 23:12:37.502207   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.502209   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.502155   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.502115   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.502823   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 23:12:37.503393   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.503445   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.503594   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.503764   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.503773   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.504648   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.505554   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.505658   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 23:12:37.506936   22143 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 23:12:37.506971   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.506988   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.507044   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1204 23:12:37.507137   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:37.507147   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:37.506938   22143 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 23:12:37.507316   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:37.507349   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:37.507363   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:37.507370   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:37.507377   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:37.508818   22143 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 23:12:37.508834   22143 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 23:12:37.508852   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.509027   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:37.509032   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.509039   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:37.509047   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1204 23:12:37.509107   22143 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 23:12:37.509636   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.510233   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.510761   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.511065   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.511224   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.511540   22143 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 23:12:37.512425   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.512451   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.512694   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.512752   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.512839   22143 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 23:12:37.512843   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 23:12:37.512892   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.513040   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.513173   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.513187   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.513245   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.513535   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.513683   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.513740   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I1204 23:12:37.513883   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.514101   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.514207   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.514308   22143 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 23:12:37.514323   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 23:12:37.514356   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.514088   22143 out.go:177]   - Using image docker.io/busybox:stable
	I1204 23:12:37.514703   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.514747   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I1204 23:12:37.514827   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.514841   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.515344   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.515348   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.515691   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 23:12:37.515761   22143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:12:37.516011   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 23:12:37.516026   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.516049   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:37.516069   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.516098   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:37.516444   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 23:12:37.516747   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.517503   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.517771   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.517799   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.518046   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 23:12:37.518061   22143 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 23:12:37.518080   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.518327   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.518357   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.518662   22143 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 23:12:37.518672   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.518908   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.519043   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.519166   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.519613   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.519855   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 23:12:37.519874   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 23:12:37.519892   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.520234   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.520266   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.520411   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.520621   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.520753   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.520822   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.520896   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.521798   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.522150   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.522174   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.522293   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.522406   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.522460   22143 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:12:37.522573   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.522668   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.523366   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.523592   22143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:12:37.523606   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:12:37.523620   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.523783   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.523801   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.524564   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.524748   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.525036   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.525178   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:37.527235   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.527657   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.527692   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.527780   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.527943   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.528089   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.528230   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	W1204 23:12:37.533930   22143 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48126->192.168.39.220:22: read: connection reset by peer
	I1204 23:12:37.533979   22143 retry.go:31] will retry after 328.150646ms: ssh: handshake failed: read tcp 192.168.39.1:48126->192.168.39.220:22: read: connection reset by peer
	I1204 23:12:37.538883   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
	I1204 23:12:37.539240   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:37.539765   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:37.539789   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:37.540132   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:37.540295   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:37.542122   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:37.542386   22143 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:12:37.542400   22143 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:12:37.542413   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:37.545112   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.545494   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:37.545525   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:37.545650   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:37.545807   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:37.545928   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:37.546126   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	W1204 23:12:37.550956   22143 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48142->192.168.39.220:22: read: connection reset by peer
	I1204 23:12:37.550986   22143 retry.go:31] will retry after 300.12687ms: ssh: handshake failed: read tcp 192.168.39.1:48142->192.168.39.220:22: read: connection reset by peer
	I1204 23:12:37.881868   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:12:37.931112   22143 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 23:12:37.931146   22143 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 23:12:37.944223   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:12:37.973588   22143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:12:37.973615   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 23:12:37.983730   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:12:37.990312   22143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 23:12:37.990343   22143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 23:12:37.993015   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 23:12:37.993034   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 23:12:37.997825   22143 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:12:37.997845   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 23:12:38.032112   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 23:12:38.060093   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:12:38.061445   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:12:38.072749   22143 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 23:12:38.072772   22143 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 23:12:38.092735   22143 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 23:12:38.092760   22143 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 23:12:38.114812   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 23:12:38.114841   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 23:12:38.145455   22143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:12:38.145563   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
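The bash pipeline logged at 23:12:38.145563 rewrites the kube-system coredns ConfigMap: the sed expression inserts a "hosts" stanza mapping 192.168.39.1 to host.minikube.internal ahead of the "forward . /etc/resolv.conf" directive, and a "log" directive ahead of "errors"; its completion is reported at 23:12:45 as "host record injected into CoreDNS's ConfigMap". A minimal way to inspect the result by hand, assuming the standard "Corefile" data key in that ConfigMap (a hedged sketch, not part of the test run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected fragment, matching the sed expression above:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }
	#        forward . /etc/resolv.conf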
	I1204 23:12:38.158970   22143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:12:38.158997   22143 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:12:38.179433   22143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 23:12:38.179468   22143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 23:12:38.203437   22143 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:12:38.203470   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 23:12:38.212341   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:12:38.248074   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 23:12:38.248106   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 23:12:38.267982   22143 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 23:12:38.268010   22143 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 23:12:38.297467   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:12:38.368177   22143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 23:12:38.368205   22143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 23:12:38.391200   22143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:12:38.391222   22143 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:12:38.423895   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:12:38.448144   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 23:12:38.448172   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 23:12:38.459576   22143 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:12:38.459606   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 23:12:38.476541   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:12:38.567174   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 23:12:38.567209   22143 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 23:12:38.583486   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:12:38.610304   22143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 23:12:38.610327   22143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 23:12:38.616050   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:12:38.737999   22143 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:12:38.738026   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 23:12:38.785801   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 23:12:38.785825   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 23:12:38.911267   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:12:39.005436   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 23:12:39.005464   22143 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 23:12:39.206792   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 23:12:39.206828   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 23:12:39.573809   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 23:12:39.573834   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 23:12:39.928030   22143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:12:39.928055   22143 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 23:12:40.432472   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:12:40.866451   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.984543556s)
	I1204 23:12:40.866485   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.922225638s)
	I1204 23:12:40.866511   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:40.866524   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:40.866527   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:40.866540   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:40.866807   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:40.866823   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:40.866827   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:40.866858   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:40.866871   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:40.866882   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:40.866895   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:40.866832   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:40.866946   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:40.867109   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:40.867122   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:40.868417   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:40.868424   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:40.868430   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:42.382644   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.398875299s)
	I1204 23:12:42.382694   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.350547124s)
	I1204 23:12:42.382705   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:42.382720   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:42.382732   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:42.382743   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:42.384233   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:42.384247   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:42.384260   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:42.384268   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:42.384275   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:42.384233   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:42.384237   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:42.384331   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:42.384345   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:42.384352   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:42.384575   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:42.384591   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:42.384605   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:42.384612   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:42.384612   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:42.384627   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:42.464578   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:42.464606   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:42.464875   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:42.464932   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:42.464950   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:44.494701   22143 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 23:12:44.494740   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:44.497922   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:44.498442   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:44.498469   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:44.498653   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:44.498861   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:44.499032   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:44.499182   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:44.729291   22143 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 23:12:44.858129   22143 addons.go:234] Setting addon gcp-auth=true in "addons-633765"
	I1204 23:12:44.858186   22143 host.go:66] Checking if "addons-633765" exists ...
	I1204 23:12:44.858603   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:44.858653   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:44.874448   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I1204 23:12:44.874928   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:44.875468   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:44.875499   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:44.875804   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:44.876254   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:12:44.876294   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:12:44.890966   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I1204 23:12:44.891474   22143 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:12:44.891966   22143 main.go:141] libmachine: Using API Version  1
	I1204 23:12:44.892003   22143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:12:44.892328   22143 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:12:44.892524   22143 main.go:141] libmachine: (addons-633765) Calling .GetState
	I1204 23:12:44.894036   22143 main.go:141] libmachine: (addons-633765) Calling .DriverName
	I1204 23:12:44.894223   22143 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 23:12:44.894250   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHHostname
	I1204 23:12:44.896983   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:44.897490   22143 main.go:141] libmachine: (addons-633765) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:d1:10", ip: ""} in network mk-addons-633765: {Iface:virbr1 ExpiryTime:2024-12-05 00:12:10 +0000 UTC Type:0 Mac:52:54:00:5d:d1:10 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-633765 Clientid:01:52:54:00:5d:d1:10}
	I1204 23:12:44.897522   22143 main.go:141] libmachine: (addons-633765) DBG | domain addons-633765 has defined IP address 192.168.39.220 and MAC address 52:54:00:5d:d1:10 in network mk-addons-633765
	I1204 23:12:44.897747   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHPort
	I1204 23:12:44.897889   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHKeyPath
	I1204 23:12:44.898030   22143 main.go:141] libmachine: (addons-633765) Calling .GetSSHUsername
	I1204 23:12:44.898157   22143 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/addons-633765/id_rsa Username:docker}
	I1204 23:12:45.463968   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.403836744s)
	I1204 23:12:45.464031   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464044   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464040   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.402563969s)
	I1204 23:12:45.464078   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464093   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464163   22143 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.318681041s)
	I1204 23:12:45.464241   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.251867846s)
	I1204 23:12:45.464265   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464279   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464341   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.166823486s)
	I1204 23:12:45.464374   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464390   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464454   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.040534842s)
	I1204 23:12:45.464480   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.464481   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464497   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464498   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.464527   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.98795923s)
	I1204 23:12:45.464547   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464549   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.464553   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.464558   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464560   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.464562   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.464571   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464573   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464578   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464581   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464680   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.881164933s)
	I1204 23:12:45.464698   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464709   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464766   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.464798   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.464804   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.464812   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.464818   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.465036   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.848948563s)
	I1204 23:12:45.465064   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.465074   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.465212   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.55390912s)
	W1204 23:12:45.465274   22143 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 23:12:45.465294   22143 retry.go:31] will retry after 248.781826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
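The error above is an ordering race rather than a bad manifest: all six files are submitted in a single kubectl apply, so the VolumeSnapshotClass object from csi-hostpath-snapshotclass.yaml reaches the API server before the volumesnapshotclasses.snapshot.storage.k8s.io CRD created in the same batch has been established, hence "no matches for kind VolumeSnapshotClass" and the hint "ensure CRDs are installed first". The retried command, re-run with --force, appears below at 23:12:45.714 and completes at 23:12:47.502. Outside of what minikube does here, one way to avoid the race is to apply the CRD first and wait for it to be established before creating the class; a hedged sketch (resource and file names taken from the log, kubectl flags are standard):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml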
	I1204 23:12:45.465354   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.465379   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.465387   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.465873   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.465903   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.465917   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.465925   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.465934   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.466010   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466035   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.466041   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.466048   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.466054   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.466096   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466116   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.466122   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.466314   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466349   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466367   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.466373   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.466382   22143 addons.go:475] Verifying addon ingress=true in "addons-633765"
	I1204 23:12:45.466759   22143 node_ready.go:35] waiting up to 6m0s for node "addons-633765" to be "Ready" ...
	I1204 23:12:45.466906   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466945   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.466952   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.466985   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.466988   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.467001   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.467031   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.467035   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.467044   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.467052   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.467060   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.467081   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.467095   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.467103   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.467109   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.464163   22143 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.318550393s)
	I1204 23:12:45.467238   22143 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 23:12:45.467637   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.467669   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.467677   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.467685   22143 addons.go:475] Verifying addon metrics-server=true in "addons-633765"
	I1204 23:12:45.467797   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.467826   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.467833   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.468224   22143 out.go:177] * Verifying ingress addon...
	I1204 23:12:45.468535   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.468573   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.468580   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.468588   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.468594   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.468855   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.468892   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.468898   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.468910   22143 addons.go:475] Verifying addon registry=true in "addons-633765"
	I1204 23:12:45.470250   22143 out.go:177] * Verifying registry addon...
	I1204 23:12:45.470263   22143 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-633765 service yakd-dashboard -n yakd-dashboard
	
	I1204 23:12:45.471064   22143 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 23:12:45.472683   22143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 23:12:45.479599   22143 node_ready.go:49] node "addons-633765" has status "Ready":"True"
	I1204 23:12:45.479618   22143 node_ready.go:38] duration metric: took 12.840529ms for node "addons-633765" to be "Ready" ...
	I1204 23:12:45.479625   22143 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:12:45.482870   22143 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 23:12:45.482894   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.485579   22143 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:12:45.485595   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:45.508948   22143 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:45.539107   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:45.539129   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:45.539467   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:45.539518   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:45.539485   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:45.714694   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:12:45.971184   22143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-633765" context rescaled to 1 replicas
	I1204 23:12:45.975270   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.978358   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:46.333702   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.90117835s)
	I1204 23:12:46.333761   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:46.333783   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:46.333761   22143 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.439515104s)
	I1204 23:12:46.334140   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:46.334159   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:46.334168   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:46.334176   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:46.334181   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:46.334384   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:46.334398   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:46.334408   22143 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-633765"
	I1204 23:12:46.336168   22143 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 23:12:46.336190   22143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:12:46.337777   22143 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 23:12:46.338491   22143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 23:12:46.338974   22143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 23:12:46.338994   22143 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 23:12:46.346769   22143 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:12:46.346794   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.476837   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.478990   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:46.485859   22143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 23:12:46.485885   22143 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 23:12:46.618557   22143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:12:46.618580   22143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 23:12:46.801069   22143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:12:46.844708   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.975098   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.976839   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:47.343002   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.475213   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.477024   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:47.502848   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.788096804s)
	I1204 23:12:47.502911   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:47.502930   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:47.503190   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:47.503226   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:47.503239   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:47.503249   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:47.503257   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:47.503473   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:47.503508   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:47.503528   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:47.516072   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:47.850259   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.013357   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:48.013952   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.043111   22143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.241994585s)
	I1204 23:12:48.043164   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:48.043180   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:48.043542   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:48.043563   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:48.043583   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:48.043635   22143 main.go:141] libmachine: Making call to close driver server
	I1204 23:12:48.043647   22143 main.go:141] libmachine: (addons-633765) Calling .Close
	I1204 23:12:48.043875   22143 main.go:141] libmachine: (addons-633765) DBG | Closing plugin on server side
	I1204 23:12:48.043887   22143 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:12:48.043899   22143 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:12:48.045756   22143 addons.go:475] Verifying addon gcp-auth=true in "addons-633765"
	I1204 23:12:48.047939   22143 out.go:177] * Verifying gcp-auth addon...
	I1204 23:12:48.050335   22143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 23:12:48.084408   22143 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 23:12:48.084431   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.343490   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.475503   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.476305   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:48.553764   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.842996   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.975187   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.976872   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:49.054199   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.343177   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:49.475700   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.477380   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:49.554402   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.844050   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:49.978887   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:49.979554   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.015451   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:50.057978   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.342968   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:50.476093   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.477251   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:50.574578   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.843724   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:50.975941   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.977047   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:51.054342   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.343414   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:51.476320   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.476373   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:51.554064   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.843409   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:51.975650   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.977360   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:52.054015   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.343188   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:52.475560   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.477586   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:52.514197   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:52.554086   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.944259   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:52.975515   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.977175   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:53.053208   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.344395   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:53.476006   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.476569   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:53.554307   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.842909   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:53.975795   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.976426   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:54.056228   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.343170   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:54.475716   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.476233   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:54.520049   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:54.567977   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.846331   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:54.977133   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.977140   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:55.053838   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.343132   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:55.476052   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.476160   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:55.553812   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.961244   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:55.976060   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.979477   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:56.053158   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.348432   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:56.475650   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.476504   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:56.553977   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.857330   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:56.978414   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:56.978572   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.017425   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:57.055298   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.343360   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:57.475199   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.476307   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:57.553585   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.844424   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:57.976463   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.976630   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:58.053710   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.342440   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:58.476801   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.477471   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:58.556096   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.843850   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:58.975576   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.980773   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:59.053857   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.343512   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:59.475191   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.476414   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:59.515066   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:59.553792   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.842991   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:59.976115   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.977481   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:00.055439   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.344591   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:00.476567   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.477275   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:00.559259   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.843794   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:00.975748   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.976247   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:01.054319   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.343367   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:01.476474   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.476771   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:01.555376   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.843585   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:01.976284   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.977217   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:02.017786   22143 pod_ready.go:103] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:02.053208   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.343687   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:02.475446   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.477158   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:02.514898   22143 pod_ready.go:93] pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:02.514919   22143 pod_ready.go:82] duration metric: took 17.005948339s for pod "amd-gpu-device-plugin-jdqtc" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.514928   22143 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-49pqn" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.516873   22143 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-49pqn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-49pqn" not found
	I1204 23:13:02.516897   22143 pod_ready.go:82] duration metric: took 1.96201ms for pod "coredns-7c65d6cfc9-49pqn" in "kube-system" namespace to be "Ready" ...
	E1204 23:13:02.516910   22143 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-49pqn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-49pqn" not found
	I1204 23:13:02.516919   22143 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jjc7h" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.521252   22143 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjc7h" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:02.521268   22143 pod_ready.go:82] duration metric: took 4.342182ms for pod "coredns-7c65d6cfc9-jjc7h" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.521275   22143 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.525042   22143 pod_ready.go:93] pod "etcd-addons-633765" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:02.525057   22143 pod_ready.go:82] duration metric: took 3.776055ms for pod "etcd-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.525064   22143 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.528767   22143 pod_ready.go:93] pod "kube-apiserver-addons-633765" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:02.528782   22143 pod_ready.go:82] duration metric: took 3.711123ms for pod "kube-apiserver-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.528789   22143 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.553921   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.712956   22143 pod_ready.go:93] pod "kube-controller-manager-addons-633765" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:02.712983   22143 pod_ready.go:82] duration metric: took 184.187223ms for pod "kube-controller-manager-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.712999   22143 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b59p5" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:02.842828   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:02.976813   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.977411   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:03.076571   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.113597   22143 pod_ready.go:93] pod "kube-proxy-b59p5" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:03.113620   22143 pod_ready.go:82] duration metric: took 400.614448ms for pod "kube-proxy-b59p5" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:03.113631   22143 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:03.343939   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:03.475379   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.477454   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:03.513791   22143 pod_ready.go:93] pod "kube-scheduler-addons-633765" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:03.513816   22143 pod_ready.go:82] duration metric: took 400.178945ms for pod "kube-scheduler-addons-633765" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:03.513830   22143 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:03.554222   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.842769   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:03.977468   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:03.977775   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.054096   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.342907   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:04.475112   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.477683   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:04.554688   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.843971   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:04.975848   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.976075   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:05.345383   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:05.347774   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:05.475473   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.478883   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:05.519831   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:05.554732   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:05.842826   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:05.977350   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:05.979088   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.054198   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:06.343748   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:06.476393   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:06.478083   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.554270   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:06.843323   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:06.975566   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.976336   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:07.053933   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:07.343133   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:07.475669   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.476630   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:07.553638   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:07.842502   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:07.975702   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.976383   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:08.019779   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:08.053156   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:08.343472   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:08.476374   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:08.476705   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.553965   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:08.843380   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:08.975451   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.976975   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:09.053556   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:09.342368   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:09.478572   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.487626   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:09.553799   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:09.842655   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:09.975536   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.975935   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:10.054366   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:10.343379   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:10.475297   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.477667   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:10.519666   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:10.553237   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:10.848347   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:10.975538   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.976692   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:11.053098   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:11.343742   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:11.477553   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:11.478301   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:11.554265   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:11.843164   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:11.975560   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:11.977579   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:12.053380   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:12.343459   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:12.476091   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:12.477580   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:12.520603   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:12.553455   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:12.843482   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:12.975551   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:12.975994   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:13.053779   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:13.342921   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:13.477050   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:13.477236   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:13.554457   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:13.843630   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:13.975679   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:13.976024   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:14.059203   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:14.343495   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:14.475662   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:14.477094   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:14.553465   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:14.842764   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:14.975936   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:14.976690   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:15.021036   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:15.053978   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:15.342879   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:15.475249   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:15.476458   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:15.575520   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:15.844045   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:15.975519   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:15.976604   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:16.053185   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:16.343762   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:16.483081   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:16.483265   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:16.581872   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:16.842739   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:16.975670   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:16.976582   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:17.053981   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:17.348638   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:17.475110   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:17.476967   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:17.522385   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:17.554064   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:17.842673   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:17.975843   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:17.977518   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:18.052994   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:18.343314   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:18.477978   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:18.478245   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:18.553809   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:18.842688   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:18.975450   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:18.976167   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:19.053320   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:19.343270   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:19.475479   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:19.475631   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:19.575528   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:19.845677   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:19.975519   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:19.976433   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:20.022038   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:20.053491   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:20.343234   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:20.474955   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:20.476155   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:20.553538   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:20.843507   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:20.975586   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:20.976126   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:21.053605   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:21.344624   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:21.476897   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:21.476946   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:21.576545   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:21.842725   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:21.975404   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:21.976927   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:22.053862   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:22.343151   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:22.476075   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:22.477307   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:22.518909   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:22.554042   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:22.843955   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:22.975004   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:22.976368   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:23.054026   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:23.343273   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:23.476048   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:23.478282   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:13:23.576078   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:23.842999   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:23.975338   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:23.976215   22143 kapi.go:107] duration metric: took 38.503529998s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 23:13:24.053739   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:24.343153   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:24.477605   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:24.520563   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:24.553891   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:24.843362   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:24.974791   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:25.054572   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:25.344636   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:25.477128   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:25.576020   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:25.844661   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:25.976841   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:26.054445   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:26.343605   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:26.475620   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:26.553927   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:26.844215   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:26.975261   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:27.021054   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:27.054405   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:27.343613   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:27.475193   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:27.556678   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:27.843074   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:27.976085   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:28.053044   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:28.343286   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:28.475499   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:28.554162   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:28.843755   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:28.975490   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:29.053098   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:29.342967   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:29.474325   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:29.570349   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:29.574840   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:29.843265   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:29.975804   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:30.054370   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:30.343685   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:30.474963   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:30.554061   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:30.842880   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:30.974667   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:31.053528   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:31.452642   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:31.475594   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:31.675907   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:31.843062   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:31.976456   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:32.020232   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:32.053741   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:32.342890   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:32.476217   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:32.554508   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:32.844721   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:32.977591   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:33.055183   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:33.343829   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:33.475402   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:33.553833   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:33.843502   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:33.975927   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:34.020371   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:34.054077   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:34.343467   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:34.478449   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:34.554342   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:34.842669   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:34.974989   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:35.053407   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:35.343769   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:35.475624   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:35.554070   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:35.843989   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:35.976226   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:36.053410   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:36.343836   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:36.475815   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:36.519246   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:36.554589   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:36.843900   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:36.976790   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:37.054400   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:37.343791   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:37.475764   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:37.553482   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:37.843946   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:37.975750   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:38.055147   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:38.343612   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:38.482149   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:38.526578   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:38.581887   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:38.843725   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:38.978447   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:39.054361   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:39.343271   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:39.474985   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:39.554158   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:39.895284   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:39.975777   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:40.053140   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:40.346418   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:40.475324   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:40.553762   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:40.842636   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:40.975927   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:41.019591   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:41.052720   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:41.343632   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:41.475383   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:41.554176   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:41.981136   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:41.982154   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:42.076987   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:42.344531   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:42.475724   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:42.553420   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:42.844092   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:42.982185   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:43.020428   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:43.054042   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:43.343120   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:43.475261   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:43.566017   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:43.852771   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:43.975639   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:44.075223   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:44.344285   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:44.475433   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:44.554053   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:44.844168   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:44.975441   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:45.053288   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:45.344056   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:45.476674   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:45.522805   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:45.585640   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:45.845463   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:45.975462   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:46.075298   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:46.343339   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:46.475780   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:46.554186   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:46.846056   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:46.975292   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:47.053840   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:47.348733   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:47.475531   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:47.553737   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:47.842967   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:48.462632   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:48.463496   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:48.464381   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:48.478346   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:48.478865   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:48.555031   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:48.843450   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:48.976494   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:49.075133   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:49.354506   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:49.475550   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:49.553668   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:49.842931   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:49.977305   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:50.077740   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:50.342618   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:50.474956   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:50.520285   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:50.554670   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:50.842416   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:50.978262   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:51.054347   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:51.343923   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:51.474663   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:51.553791   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:51.843120   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:51.974797   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:52.054653   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:52.343065   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:52.475812   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:52.523096   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:52.554141   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:52.844297   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:52.975538   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:53.054326   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:53.360817   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:53.797675   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:53.899778   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:53.900146   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:53.974943   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:54.060404   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:54.344090   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:54.475543   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:54.553987   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:54.843347   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:54.975406   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:55.019447   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:55.053909   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:55.343206   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:55.475242   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:55.556191   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:55.843044   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:55.974786   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:56.053371   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:56.445525   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:56.480463   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:56.563974   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:56.843604   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:56.976312   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:57.019886   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:57.053758   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:57.342653   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:57.476510   22143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:57.554289   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:57.843781   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:57.975966   22143 kapi.go:107] duration metric: took 1m12.504895041s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 23:13:58.054181   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:58.343752   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:58.567143   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:58.843558   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:59.054607   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:59.344136   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:13:59.519605   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:59.553335   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:59.843099   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:00.054207   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:14:00.343343   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:00.554581   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:14:00.844268   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:01.054154   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:14:01.343847   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:01.520997   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:01.553679   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:14:01.843835   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:02.055057   22143 kapi.go:107] duration metric: took 1m14.004719544s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 23:14:02.056889   22143 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-633765 cluster.
	I1204 23:14:02.058145   22143 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 23:14:02.059458   22143 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1204 23:14:02.343517   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:02.844412   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:03.343911   22143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:14:03.522323   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:03.843805   22143 kapi.go:107] duration metric: took 1m17.505314413s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 23:14:03.845607   22143 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1204 23:14:03.847170   22143 addons.go:510] duration metric: took 1m26.505882286s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner-rancher amd-gpu-device-plugin storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1204 23:14:06.020832   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:08.523149   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:11.020174   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:13.523182   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:16.019938   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:18.025194   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:20.520652   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:23.020228   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:25.520227   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:28.020322   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:30.520418   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:32.521468   22143 pod_ready.go:103] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"False"
	I1204 23:14:33.021417   22143 pod_ready.go:93] pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace has status "Ready":"True"
	I1204 23:14:33.021443   22143 pod_ready.go:82] duration metric: took 1m29.507606306s for pod "metrics-server-84c5f94fbc-k4bz2" in "kube-system" namespace to be "Ready" ...
	I1204 23:14:33.021453   22143 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-l6p4n" in "kube-system" namespace to be "Ready" ...
	I1204 23:14:33.026409   22143 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-l6p4n" in "kube-system" namespace has status "Ready":"True"
	I1204 23:14:33.026426   22143 pod_ready.go:82] duration metric: took 4.966925ms for pod "nvidia-device-plugin-daemonset-l6p4n" in "kube-system" namespace to be "Ready" ...
	I1204 23:14:33.026445   22143 pod_ready.go:39] duration metric: took 1m47.546810788s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:14:33.026459   22143 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:14:33.026483   22143 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:14:33.026528   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:14:33.081358   22143 cri.go:89] found id: "f8abd02bd400a4b7f8249327bbcc2a1c2d0f41f9957e16adcb0cade76935a758"
	I1204 23:14:33.081389   22143 cri.go:89] found id: ""
	I1204 23:14:33.081399   22143 logs.go:282] 1 containers: [f8abd02bd400a4b7f8249327bbcc2a1c2d0f41f9957e16adcb0cade76935a758]
	I1204 23:14:33.081462   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.085809   22143 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:14:33.085869   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:14:33.122334   22143 cri.go:89] found id: "96533a739f389fb46f849944dceb202b79ea3dbdd5d97eda9881f68ba6809c4c"
	I1204 23:14:33.122362   22143 cri.go:89] found id: ""
	I1204 23:14:33.122372   22143 logs.go:282] 1 containers: [96533a739f389fb46f849944dceb202b79ea3dbdd5d97eda9881f68ba6809c4c]
	I1204 23:14:33.122431   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.126182   22143 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:14:33.126240   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:14:33.162599   22143 cri.go:89] found id: "4678b1329db8e23d65a5a71be8607305f081ae67a319af2040992459730868de"
	I1204 23:14:33.162632   22143 cri.go:89] found id: ""
	I1204 23:14:33.162644   22143 logs.go:282] 1 containers: [4678b1329db8e23d65a5a71be8607305f081ae67a319af2040992459730868de]
	I1204 23:14:33.162700   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.166717   22143 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:14:33.166782   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:14:33.207350   22143 cri.go:89] found id: "11a5fc32b8e2530101b3dc03c0daf8b13d9e1982adb48ca534636520dd1383ca"
	I1204 23:14:33.207378   22143 cri.go:89] found id: ""
	I1204 23:14:33.207389   22143 logs.go:282] 1 containers: [11a5fc32b8e2530101b3dc03c0daf8b13d9e1982adb48ca534636520dd1383ca]
	I1204 23:14:33.207447   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.211898   22143 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:14:33.211962   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:14:33.252197   22143 cri.go:89] found id: "165346eec97d03565650d7696098d6f23341eebb169c8e9325c655cb24176311"
	I1204 23:14:33.252226   22143 cri.go:89] found id: ""
	I1204 23:14:33.252237   22143 logs.go:282] 1 containers: [165346eec97d03565650d7696098d6f23341eebb169c8e9325c655cb24176311]
	I1204 23:14:33.252289   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.256677   22143 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:14:33.256747   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:14:33.303674   22143 cri.go:89] found id: "d97c84f748b985b9b48cb23afffabf8b5e4efe8607224b70d1b8ce6149c1f99f"
	I1204 23:14:33.303697   22143 cri.go:89] found id: ""
	I1204 23:14:33.303705   22143 logs.go:282] 1 containers: [d97c84f748b985b9b48cb23afffabf8b5e4efe8607224b70d1b8ce6149c1f99f]
	I1204 23:14:33.303751   22143 ssh_runner.go:195] Run: which crictl
	I1204 23:14:33.308085   22143 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:14:33.308142   22143 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:14:33.345672   22143 cri.go:89] found id: ""
	I1204 23:14:33.345707   22143 logs.go:282] 0 containers: []
	W1204 23:14:33.345719   22143 logs.go:284] No container was found matching "kindnet"
	I1204 23:14:33.345731   22143 logs.go:123] Gathering logs for container status ...
	I1204 23:14:33.345745   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:14:33.392145   22143 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:14:33.392226   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:14:33.657604   22143 logs.go:123] Gathering logs for coredns [4678b1329db8e23d65a5a71be8607305f081ae67a319af2040992459730868de] ...
	I1204 23:14:33.657637   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4678b1329db8e23d65a5a71be8607305f081ae67a319af2040992459730868de"
	I1204 23:14:33.700513   22143 logs.go:123] Gathering logs for kube-proxy [165346eec97d03565650d7696098d6f23341eebb169c8e9325c655cb24176311] ...
	I1204 23:14:33.700547   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 165346eec97d03565650d7696098d6f23341eebb169c8e9325c655cb24176311"
	I1204 23:14:33.740155   22143 logs.go:123] Gathering logs for kube-controller-manager [d97c84f748b985b9b48cb23afffabf8b5e4efe8607224b70d1b8ce6149c1f99f] ...
	I1204 23:14:33.740185   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97c84f748b985b9b48cb23afffabf8b5e4efe8607224b70d1b8ce6149c1f99f"
	I1204 23:14:33.799222   22143 logs.go:123] Gathering logs for kube-scheduler [11a5fc32b8e2530101b3dc03c0daf8b13d9e1982adb48ca534636520dd1383ca] ...
	I1204 23:14:33.799257   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11a5fc32b8e2530101b3dc03c0daf8b13d9e1982adb48ca534636520dd1383ca"
	I1204 23:14:33.849127   22143 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:14:33.849158   22143 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:109: out/minikube-linux-amd64 start -p addons-633765 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
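
For context on the gcp-auth hint printed at 23:14:02 in the log above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration"), the following is a minimal sketch of a pod that opts out of the credential mount. Only the label key comes from the log; the pod name, image, label value, and the use of client-go types here are illustrative assumptions, not part of the test suite.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Pod carrying the gcp-auth-skip-secret label mentioned in the log;
		// name, image, and label value are illustrative assumptions.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "demo",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "demo", Image: "busybox"}},
			},
		}

		// Print the manifest so it could be applied with kubectl if desired.
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}

Per the message in the log, applying a manifest like this should leave the pod without the mounted GCP credentials, while all other pods in the cluster get them by default.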

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 node stop m02 -v=7 --alsologtostderr
E1205 00:00:27.511104   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:47.992810   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:01:28.954166   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-168758 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.471888479s)

                                                
                                                
-- stdout --
	* Stopping node "ha-168758-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 00:00:27.340015   40609 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:00:27.340166   40609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:27.340176   40609 out.go:358] Setting ErrFile to fd 2...
	I1205 00:00:27.340180   40609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:27.340395   40609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:00:27.340631   40609 mustload.go:65] Loading cluster: ha-168758
	I1205 00:00:27.341019   40609 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:00:27.341037   40609 stop.go:39] StopHost: ha-168758-m02
	I1205 00:00:27.341366   40609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:00:27.341408   40609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:00:27.356985   40609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
	I1205 00:00:27.357487   40609 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:00:27.358123   40609 main.go:141] libmachine: Using API Version  1
	I1205 00:00:27.358148   40609 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:00:27.358509   40609 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:00:27.360510   40609 out.go:177] * Stopping node "ha-168758-m02"  ...
	I1205 00:00:27.361768   40609 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 00:00:27.361792   40609 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1205 00:00:27.362095   40609 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 00:00:27.362141   40609 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1205 00:00:27.365191   40609 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1205 00:00:27.365607   40609 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1205 00:00:27.365639   40609 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1205 00:00:27.365789   40609 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1205 00:00:27.365959   40609 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1205 00:00:27.366091   40609 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1205 00:00:27.366217   40609 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1205 00:00:27.452798   40609 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 00:00:27.505723   40609 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 00:00:27.559819   40609 main.go:141] libmachine: Stopping "ha-168758-m02"...
	I1205 00:00:27.559851   40609 main.go:141] libmachine: (ha-168758-m02) Calling .GetState
	I1205 00:00:27.561666   40609 main.go:141] libmachine: (ha-168758-m02) Calling .Stop
	I1205 00:00:27.565245   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 0/120
	I1205 00:00:28.566735   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 1/120
	I1205 00:00:29.568597   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 2/120
	I1205 00:00:30.570141   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 3/120
	I1205 00:00:31.572489   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 4/120
	I1205 00:00:32.574337   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 5/120
	I1205 00:00:33.576465   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 6/120
	I1205 00:00:34.577799   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 7/120
	I1205 00:00:35.579492   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 8/120
	I1205 00:00:36.580817   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 9/120
	I1205 00:00:37.583013   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 10/120
	I1205 00:00:38.585141   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 11/120
	I1205 00:00:39.586521   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 12/120
	I1205 00:00:40.587955   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 13/120
	I1205 00:00:41.590142   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 14/120
	I1205 00:00:42.592019   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 15/120
	I1205 00:00:43.593375   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 16/120
	I1205 00:00:44.594650   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 17/120
	I1205 00:00:45.596596   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 18/120
	I1205 00:00:46.598426   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 19/120
	I1205 00:00:47.600380   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 20/120
	I1205 00:00:48.601909   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 21/120
	I1205 00:00:49.603198   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 22/120
	I1205 00:00:50.604470   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 23/120
	I1205 00:00:51.605881   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 24/120
	I1205 00:00:52.608015   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 25/120
	I1205 00:00:53.609490   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 26/120
	I1205 00:00:54.611111   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 27/120
	I1205 00:00:55.612476   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 28/120
	I1205 00:00:56.613837   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 29/120
	I1205 00:00:57.616063   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 30/120
	I1205 00:00:58.617638   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 31/120
	I1205 00:00:59.619127   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 32/120
	I1205 00:01:00.620464   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 33/120
	I1205 00:01:01.621876   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 34/120
	I1205 00:01:02.623864   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 35/120
	I1205 00:01:03.625267   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 36/120
	I1205 00:01:04.626905   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 37/120
	I1205 00:01:05.628942   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 38/120
	I1205 00:01:06.630399   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 39/120
	I1205 00:01:07.632278   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 40/120
	I1205 00:01:08.633591   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 41/120
	I1205 00:01:09.634912   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 42/120
	I1205 00:01:10.636344   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 43/120
	I1205 00:01:11.637687   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 44/120
	I1205 00:01:12.639587   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 45/120
	I1205 00:01:13.640904   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 46/120
	I1205 00:01:14.642443   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 47/120
	I1205 00:01:15.644617   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 48/120
	I1205 00:01:16.646244   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 49/120
	I1205 00:01:17.648435   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 50/120
	I1205 00:01:18.649702   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 51/120
	I1205 00:01:19.651044   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 52/120
	I1205 00:01:20.652512   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 53/120
	I1205 00:01:21.653658   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 54/120
	I1205 00:01:22.655445   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 55/120
	I1205 00:01:23.657479   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 56/120
	I1205 00:01:24.659679   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 57/120
	I1205 00:01:25.660965   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 58/120
	I1205 00:01:26.662622   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 59/120
	I1205 00:01:27.664459   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 60/120
	I1205 00:01:28.665591   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 61/120
	I1205 00:01:29.666765   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 62/120
	I1205 00:01:30.669092   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 63/120
	I1205 00:01:31.670657   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 64/120
	I1205 00:01:32.672484   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 65/120
	I1205 00:01:33.673977   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 66/120
	I1205 00:01:34.675383   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 67/120
	I1205 00:01:35.676888   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 68/120
	I1205 00:01:36.678302   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 69/120
	I1205 00:01:37.680392   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 70/120
	I1205 00:01:38.681720   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 71/120
	I1205 00:01:39.683124   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 72/120
	I1205 00:01:40.684337   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 73/120
	I1205 00:01:41.686136   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 74/120
	I1205 00:01:42.688168   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 75/120
	I1205 00:01:43.689526   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 76/120
	I1205 00:01:44.690812   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 77/120
	I1205 00:01:45.692466   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 78/120
	I1205 00:01:46.694194   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 79/120
	I1205 00:01:47.696135   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 80/120
	I1205 00:01:48.697587   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 81/120
	I1205 00:01:49.698875   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 82/120
	I1205 00:01:50.700432   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 83/120
	I1205 00:01:51.702566   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 84/120
	I1205 00:01:52.704426   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 85/120
	I1205 00:01:53.705965   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 86/120
	I1205 00:01:54.707507   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 87/120
	I1205 00:01:55.709005   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 88/120
	I1205 00:01:56.710454   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 89/120
	I1205 00:01:57.712371   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 90/120
	I1205 00:01:58.713770   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 91/120
	I1205 00:01:59.715122   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 92/120
	I1205 00:02:00.716634   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 93/120
	I1205 00:02:01.718076   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 94/120
	I1205 00:02:02.720153   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 95/120
	I1205 00:02:03.722313   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 96/120
	I1205 00:02:04.723615   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 97/120
	I1205 00:02:05.724839   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 98/120
	I1205 00:02:06.726288   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 99/120
	I1205 00:02:07.728440   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 100/120
	I1205 00:02:08.729879   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 101/120
	I1205 00:02:09.731667   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 102/120
	I1205 00:02:10.732964   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 103/120
	I1205 00:02:11.735288   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 104/120
	I1205 00:02:12.737631   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 105/120
	I1205 00:02:13.739344   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 106/120
	I1205 00:02:14.740801   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 107/120
	I1205 00:02:15.742379   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 108/120
	I1205 00:02:16.744695   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 109/120
	I1205 00:02:17.746979   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 110/120
	I1205 00:02:18.748387   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 111/120
	I1205 00:02:19.749601   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 112/120
	I1205 00:02:20.751286   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 113/120
	I1205 00:02:21.752585   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 114/120
	I1205 00:02:22.754602   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 115/120
	I1205 00:02:23.756415   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 116/120
	I1205 00:02:24.759018   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 117/120
	I1205 00:02:25.760388   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 118/120
	I1205 00:02:26.761936   40609 main.go:141] libmachine: (ha-168758-m02) Waiting for machine to stop 119/120
	I1205 00:02:27.763014   40609 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 00:02:27.763146   40609 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-168758 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr: (18.723416465s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
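
The repeated "Waiting for machine to stop N/120" lines and the final `unable to stop vm, current state "Running"` error in the transcript above trace a simple pattern: request a stop, poll the VM state roughly once per second, and give up after 120 attempts. Below is a minimal sketch of that pattern, assuming a hypothetical Stopper interface; it is not minikube's or libmachine's actual code.

	package stopwait

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stopper is a hypothetical stand-in for the driver calls seen in the log
	// (Calling .Stop / .GetState); the real libmachine interfaces differ.
	type Stopper interface {
		Stop() error
		State() (string, error)
	}

	// StopWithTimeout mirrors the pattern in the transcript: issue a stop
	// request, then poll roughly once per second for up to maxAttempts before
	// reporting that the VM is still running.
	func StopWithTimeout(m Stopper, maxAttempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			st, err := m.State()
			if err != nil {
				return err
			}
			if st == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

With maxAttempts at 120 this gives the roughly two-minute window visible in the timestamps above (00:00:27 to 00:02:27), which matches the 2m0.47s duration and the exit status 30 reported by the test.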
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (1.324185628s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m03_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:55:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:55:54.691284   36539 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:54.691414   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691424   36539 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:54.691432   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691628   36539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:54.692240   36539 out.go:352] Setting JSON to false
	I1204 23:55:54.693153   36539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5901,"bootTime":1733350654,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:54.693247   36539 start.go:139] virtualization: kvm guest
	I1204 23:55:54.695552   36539 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:54.696955   36539 notify.go:220] Checking for updates...
	I1204 23:55:54.696986   36539 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:54.698203   36539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:54.699583   36539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:54.700898   36539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.702193   36539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:54.703640   36539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:54.705053   36539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:54.739286   36539 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 23:55:54.740386   36539 start.go:297] selected driver: kvm2
	I1204 23:55:54.740398   36539 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:55:54.740409   36539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:54.741134   36539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.741212   36539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:55:54.755884   36539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:55:54.755947   36539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:55:54.756215   36539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:55:54.756246   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:55:54.756288   36539 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 23:55:54.756296   36539 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:55:54.756346   36539 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:54.756434   36539 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.758105   36539 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1204 23:55:54.759313   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:55:54.759357   36539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:55:54.759363   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:55:54.759483   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:55:54.759495   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:55:54.759812   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:55:54.759833   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json: {Name:mk6e7aaeeb71d483ab659741dd4a36c7ed0cf735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:55:54.759956   36539 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:55:54.759986   36539 start.go:364] duration metric: took 15.99µs to acquireMachinesLock for "ha-168758"
	I1204 23:55:54.759999   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:55:54.760065   36539 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 23:55:54.761682   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:55:54.761785   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:54.761819   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:54.775974   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I1204 23:55:54.776416   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:54.776960   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:55:54.776978   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:54.777327   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:54.777500   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:55:54.777669   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:55:54.777814   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:55:54.777843   36539 client.go:168] LocalClient.Create starting
	I1204 23:55:54.777875   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:55:54.777915   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.777928   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778006   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:55:54.778033   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.778049   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778066   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:55:54.778079   36539 main.go:141] libmachine: (ha-168758) Calling .PreCreateCheck
	I1204 23:55:54.778456   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:55:54.778813   36539 main.go:141] libmachine: Creating machine...
	I1204 23:55:54.778825   36539 main.go:141] libmachine: (ha-168758) Calling .Create
	I1204 23:55:54.778947   36539 main.go:141] libmachine: (ha-168758) Creating KVM machine...
	I1204 23:55:54.780063   36539 main.go:141] libmachine: (ha-168758) DBG | found existing default KVM network
	I1204 23:55:54.780869   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.780727   36562 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1204 23:55:54.780949   36539 main.go:141] libmachine: (ha-168758) DBG | created network xml: 
	I1204 23:55:54.780981   36539 main.go:141] libmachine: (ha-168758) DBG | <network>
	I1204 23:55:54.780994   36539 main.go:141] libmachine: (ha-168758) DBG |   <name>mk-ha-168758</name>
	I1204 23:55:54.781002   36539 main.go:141] libmachine: (ha-168758) DBG |   <dns enable='no'/>
	I1204 23:55:54.781007   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781012   36539 main.go:141] libmachine: (ha-168758) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 23:55:54.781017   36539 main.go:141] libmachine: (ha-168758) DBG |     <dhcp>
	I1204 23:55:54.781022   36539 main.go:141] libmachine: (ha-168758) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 23:55:54.781030   36539 main.go:141] libmachine: (ha-168758) DBG |     </dhcp>
	I1204 23:55:54.781067   36539 main.go:141] libmachine: (ha-168758) DBG |   </ip>
	I1204 23:55:54.781085   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781094   36539 main.go:141] libmachine: (ha-168758) DBG | </network>
	I1204 23:55:54.781107   36539 main.go:141] libmachine: (ha-168758) DBG | 
	I1204 23:55:54.786047   36539 main.go:141] libmachine: (ha-168758) DBG | trying to create private KVM network mk-ha-168758 192.168.39.0/24...
	I1204 23:55:54.850461   36539 main.go:141] libmachine: (ha-168758) DBG | private KVM network mk-ha-168758 192.168.39.0/24 created
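
The DBG lines above echo the network XML minikube generated for the private "mk-ha-168758" network: an isolated 192.168.39.0/24 bridge with a DHCP range of .2-.253 and DNS disabled. As a rough, hypothetical illustration of the same step outside minikube (this is not the code path used here), the XML could be saved to a file and handed to libvirt with virsh from a small Go wrapper; the file name and helper are made up for the sketch.

// networksketch.go - illustrative only; assumes the XML above is saved to
// mk-ha-168758.xml and that virsh can reach qemu:///system.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createTransientNetwork defines and starts a network from an XML file in a
// single step, mirroring the "trying to create private KVM network" /
// "private KVM network ... created" pair of DBG lines above.
func createTransientNetwork(xmlPath string) error {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-create", xmlPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh net-create %s: %v: %s", xmlPath, err, out)
	}
	return nil
}

func main() {
	if err := createTransientNetwork("mk-ha-168758.xml"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("network created")
}
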
	I1204 23:55:54.850492   36539 main.go:141] libmachine: (ha-168758) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:54.850520   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.850437   36562 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.850537   36539 main.go:141] libmachine: (ha-168758) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:55:54.850557   36539 main.go:141] libmachine: (ha-168758) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:55:55.110330   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.110147   36562 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa...
	I1204 23:55:55.305236   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305108   36562 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk...
	I1204 23:55:55.305287   36539 main.go:141] libmachine: (ha-168758) DBG | Writing magic tar header
	I1204 23:55:55.305304   36539 main.go:141] libmachine: (ha-168758) DBG | Writing SSH key tar header
	I1204 23:55:55.305328   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305248   36562 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:55.305350   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758
	I1204 23:55:55.305432   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 (perms=drwx------)
	I1204 23:55:55.305462   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:55:55.305478   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:55:55.305493   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:55:55.305502   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:55:55.305511   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:55:55.305522   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:55:55.305536   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:55.305552   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:55.305567   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:55:55.305572   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:55:55.305581   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:55:55.305588   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home
	I1204 23:55:55.305594   36539 main.go:141] libmachine: (ha-168758) DBG | Skipping /home - not owner
	I1204 23:55:55.306542   36539 main.go:141] libmachine: (ha-168758) define libvirt domain using xml: 
	I1204 23:55:55.306560   36539 main.go:141] libmachine: (ha-168758) <domain type='kvm'>
	I1204 23:55:55.306569   36539 main.go:141] libmachine: (ha-168758)   <name>ha-168758</name>
	I1204 23:55:55.306577   36539 main.go:141] libmachine: (ha-168758)   <memory unit='MiB'>2200</memory>
	I1204 23:55:55.306585   36539 main.go:141] libmachine: (ha-168758)   <vcpu>2</vcpu>
	I1204 23:55:55.306593   36539 main.go:141] libmachine: (ha-168758)   <features>
	I1204 23:55:55.306598   36539 main.go:141] libmachine: (ha-168758)     <acpi/>
	I1204 23:55:55.306605   36539 main.go:141] libmachine: (ha-168758)     <apic/>
	I1204 23:55:55.306610   36539 main.go:141] libmachine: (ha-168758)     <pae/>
	I1204 23:55:55.306624   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.306651   36539 main.go:141] libmachine: (ha-168758)   </features>
	I1204 23:55:55.306675   36539 main.go:141] libmachine: (ha-168758)   <cpu mode='host-passthrough'>
	I1204 23:55:55.306682   36539 main.go:141] libmachine: (ha-168758)   
	I1204 23:55:55.306690   36539 main.go:141] libmachine: (ha-168758)   </cpu>
	I1204 23:55:55.306700   36539 main.go:141] libmachine: (ha-168758)   <os>
	I1204 23:55:55.306707   36539 main.go:141] libmachine: (ha-168758)     <type>hvm</type>
	I1204 23:55:55.306714   36539 main.go:141] libmachine: (ha-168758)     <boot dev='cdrom'/>
	I1204 23:55:55.306724   36539 main.go:141] libmachine: (ha-168758)     <boot dev='hd'/>
	I1204 23:55:55.306733   36539 main.go:141] libmachine: (ha-168758)     <bootmenu enable='no'/>
	I1204 23:55:55.306742   36539 main.go:141] libmachine: (ha-168758)   </os>
	I1204 23:55:55.306751   36539 main.go:141] libmachine: (ha-168758)   <devices>
	I1204 23:55:55.306763   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='cdrom'>
	I1204 23:55:55.306773   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/boot2docker.iso'/>
	I1204 23:55:55.306798   36539 main.go:141] libmachine: (ha-168758)       <target dev='hdc' bus='scsi'/>
	I1204 23:55:55.306817   36539 main.go:141] libmachine: (ha-168758)       <readonly/>
	I1204 23:55:55.306827   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306836   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='disk'>
	I1204 23:55:55.306848   36539 main.go:141] libmachine: (ha-168758)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:55:55.306863   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk'/>
	I1204 23:55:55.306877   36539 main.go:141] libmachine: (ha-168758)       <target dev='hda' bus='virtio'/>
	I1204 23:55:55.306888   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306895   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306907   36539 main.go:141] libmachine: (ha-168758)       <source network='mk-ha-168758'/>
	I1204 23:55:55.306917   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306926   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306935   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306944   36539 main.go:141] libmachine: (ha-168758)       <source network='default'/>
	I1204 23:55:55.306958   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306969   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306980   36539 main.go:141] libmachine: (ha-168758)     <serial type='pty'>
	I1204 23:55:55.306986   36539 main.go:141] libmachine: (ha-168758)       <target port='0'/>
	I1204 23:55:55.306993   36539 main.go:141] libmachine: (ha-168758)     </serial>
	I1204 23:55:55.307001   36539 main.go:141] libmachine: (ha-168758)     <console type='pty'>
	I1204 23:55:55.307011   36539 main.go:141] libmachine: (ha-168758)       <target type='serial' port='0'/>
	I1204 23:55:55.307022   36539 main.go:141] libmachine: (ha-168758)     </console>
	I1204 23:55:55.307036   36539 main.go:141] libmachine: (ha-168758)     <rng model='virtio'>
	I1204 23:55:55.307049   36539 main.go:141] libmachine: (ha-168758)       <backend model='random'>/dev/random</backend>
	I1204 23:55:55.307058   36539 main.go:141] libmachine: (ha-168758)     </rng>
	I1204 23:55:55.307065   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307074   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307081   36539 main.go:141] libmachine: (ha-168758)   </devices>
	I1204 23:55:55.307091   36539 main.go:141] libmachine: (ha-168758) </domain>
	I1204 23:55:55.307100   36539 main.go:141] libmachine: (ha-168758) 
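
The <domain> definition echoed above gives the VM 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk image, and two virtio NICs (one on mk-ha-168758, one on the default network). libmachine then defines and boots it ("Creating domain..."). A minimal sketch of those two steps, assuming the libvirt Go bindings (libvirt.org/go/libvirt) rather than minikube's internal kvm2 driver plugin, and a hypothetical path for the XML:

// domainsketch.go - illustrative only; not the minikube implementation.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// In the log the XML is built in memory; here it is read from a file.
	xml, err := os.ReadFile("ha-168758-domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then start it - the same two steps the
	// "define libvirt domain using xml" and "Creating domain..." lines report.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
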
	I1204 23:55:55.311286   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:cd:a8:a7 in network default
	I1204 23:55:55.311801   36539 main.go:141] libmachine: (ha-168758) Ensuring networks are active...
	I1204 23:55:55.311821   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:55.312494   36539 main.go:141] libmachine: (ha-168758) Ensuring network default is active
	I1204 23:55:55.312741   36539 main.go:141] libmachine: (ha-168758) Ensuring network mk-ha-168758 is active
	I1204 23:55:55.313181   36539 main.go:141] libmachine: (ha-168758) Getting domain xml...
	I1204 23:55:55.313756   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:56.499396   36539 main.go:141] libmachine: (ha-168758) Waiting to get IP...
	I1204 23:55:56.500076   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.500499   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.500544   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.500496   36562 retry.go:31] will retry after 269.735879ms: waiting for machine to come up
	I1204 23:55:56.772123   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.772694   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.772717   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.772654   36562 retry.go:31] will retry after 276.504012ms: waiting for machine to come up
	I1204 23:55:57.051105   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.051610   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.051639   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.051569   36562 retry.go:31] will retry after 401.653592ms: waiting for machine to come up
	I1204 23:55:57.455271   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.455618   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.455645   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.455573   36562 retry.go:31] will retry after 574.501264ms: waiting for machine to come up
	I1204 23:55:58.031181   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.031549   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.031571   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.031508   36562 retry.go:31] will retry after 481.845874ms: waiting for machine to come up
	I1204 23:55:58.515212   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.515533   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.515555   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.515482   36562 retry.go:31] will retry after 665.524951ms: waiting for machine to come up
	I1204 23:55:59.182393   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:59.182797   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:59.182828   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:59.182761   36562 retry.go:31] will retry after 904.131177ms: waiting for machine to come up
	I1204 23:56:00.088207   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:00.088652   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:00.088679   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:00.088622   36562 retry.go:31] will retry after 1.17453888s: waiting for machine to come up
	I1204 23:56:01.265172   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:01.265600   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:01.265628   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:01.265549   36562 retry.go:31] will retry after 1.695149003s: waiting for machine to come up
	I1204 23:56:02.963009   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:02.963469   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:02.963496   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:02.963425   36562 retry.go:31] will retry after 1.502018093s: waiting for machine to come up
	I1204 23:56:04.467087   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:04.467517   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:04.467540   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:04.467461   36562 retry.go:31] will retry after 2.001332834s: waiting for machine to come up
	I1204 23:56:06.470836   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:06.471294   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:06.471318   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:06.471260   36562 retry.go:31] will retry after 3.491074143s: waiting for machine to come up
	I1204 23:56:09.963576   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:09.964086   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:09.964117   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:09.964017   36562 retry.go:31] will retry after 3.362007605s: waiting for machine to come up
	I1204 23:56:13.329557   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:13.329894   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:13.329919   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:13.329871   36562 retry.go:31] will retry after 3.508639528s: waiting for machine to come up
	I1204 23:56:16.841192   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.841632   36539 main.go:141] libmachine: (ha-168758) Found IP for machine: 192.168.39.246
	I1204 23:56:16.841646   36539 main.go:141] libmachine: (ha-168758) Reserving static IP address...
	I1204 23:56:16.841659   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has current primary IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.842065   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find host DHCP lease matching {name: "ha-168758", mac: "52:54:00:e5:a0:85", ip: "192.168.39.246"} in network mk-ha-168758
	I1204 23:56:16.911192   36539 main.go:141] libmachine: (ha-168758) DBG | Getting to WaitForSSH function...
	I1204 23:56:16.911218   36539 main.go:141] libmachine: (ha-168758) Reserved static IP address: 192.168.39.246
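
The repeated "will retry after ...: waiting for machine to come up" lines are a growing, jittered backoff loop: right after boot the VM has no DHCP lease, so the MAC-to-IP lookup keeps failing until the guest comes up, at which point the discovered address (192.168.39.246 here) is reserved as a static host entry. A self-contained sketch of that polling pattern, with illustrative delays rather than minikube's exact retry schedule:

// retrysketch.go - illustrative only; lookup and the backoff constants are
// placeholders, not minikube's retry.go implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup with a growing, jittered delay until it returns an
// address or the attempt budget runs out.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", errNoIP
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 { // pretend the DHCP lease shows up on the fifth poll
			return "", errNoIP
		}
		return "192.168.39.246", nil
	}, 20)
	fmt.Println(ip, err)
}
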
	I1204 23:56:16.911227   36539 main.go:141] libmachine: (ha-168758) Waiting for SSH to be available...
	I1204 23:56:16.913359   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913692   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:16.913721   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913806   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH client type: external
	I1204 23:56:16.913831   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa (-rw-------)
	I1204 23:56:16.913874   36539 main.go:141] libmachine: (ha-168758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:56:16.913895   36539 main.go:141] libmachine: (ha-168758) DBG | About to run SSH command:
	I1204 23:56:16.913909   36539 main.go:141] libmachine: (ha-168758) DBG | exit 0
	I1204 23:56:17.042024   36539 main.go:141] libmachine: (ha-168758) DBG | SSH cmd err, output: <nil>: 
	I1204 23:56:17.042323   36539 main.go:141] libmachine: (ha-168758) KVM machine creation complete!
	I1204 23:56:17.042711   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:17.043239   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043439   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043673   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:56:17.043690   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:17.045108   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:56:17.045123   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:56:17.045129   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:56:17.045137   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.047586   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.047979   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.048020   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.048171   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.048340   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048499   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048610   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.048758   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.048940   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.048952   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:56:17.161079   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.161103   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:56:17.161111   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.163996   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164353   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.164381   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164502   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.164678   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164819   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164953   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.165084   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.165251   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.165266   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:56:17.278333   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:56:17.278413   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:56:17.278424   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:56:17.278431   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278638   36539 buildroot.go:166] provisioning hostname "ha-168758"
	I1204 23:56:17.278665   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278846   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.281326   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281670   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.281694   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281868   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.282066   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282192   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282295   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.282427   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.282619   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.282636   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1204 23:56:17.407414   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1204 23:56:17.407442   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.410140   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410515   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.410537   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410696   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.410903   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411049   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411179   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.411328   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.411496   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.411510   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:56:17.529969   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
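
The command above sets the transient hostname and then pins it in /etc/hosts, either rewriting an existing 127.0.1.1 entry or appending one, so the net effect on the guest is a line of the form:

	127.0.1.1 ha-168758
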
	I1204 23:56:17.530000   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:56:17.530033   36539 buildroot.go:174] setting up certificates
	I1204 23:56:17.530051   36539 provision.go:84] configureAuth start
	I1204 23:56:17.530060   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.530320   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:17.533080   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533538   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.533566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533710   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.535938   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536275   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.536304   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536461   36539 provision.go:143] copyHostCerts
	I1204 23:56:17.536485   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536522   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:56:17.536530   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536603   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:56:17.536688   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536713   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:56:17.536717   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536740   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:56:17.536823   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536843   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:56:17.536847   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536869   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:56:17.536930   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
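
provision.go then issues a server certificate signed by the minikube CA, with the SAN list shown above (127.0.0.1, 192.168.39.246, ha-168758, localhost, minikube). A self-contained sketch of issuing such a certificate with Go's crypto/x509; unlike the real provisioner it generates a throwaway CA instead of reusing ca.pem/ca-key.pem from disk, and the key size, lifetime, and subject are placeholders:

// certsketch.go - illustrative only; not minikube's provisioning code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the log reuses an existing ca.pem / ca-key.pem instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs reported in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-168758"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
		DNSNames:     []string{"ha-168758", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		log.Fatal(err)
	}
}
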
	I1204 23:56:17.711262   36539 provision.go:177] copyRemoteCerts
	I1204 23:56:17.711322   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:56:17.711342   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.714054   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714497   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.714526   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714715   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.714918   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.715121   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.715311   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:17.799635   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:56:17.799713   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:56:17.821716   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:56:17.821774   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 23:56:17.843183   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:56:17.843274   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:56:17.864534   36539 provision.go:87] duration metric: took 334.473115ms to configureAuth
	I1204 23:56:17.864557   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:56:17.864710   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:17.864770   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.867187   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867580   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.867600   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867756   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.867942   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868106   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868250   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.868399   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.868552   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.868568   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:56:18.094883   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:56:18.094906   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:56:18.094924   36539 main.go:141] libmachine: (ha-168758) Calling .GetURL
	I1204 23:56:18.096038   36539 main.go:141] libmachine: (ha-168758) DBG | Using libvirt version 6000000
	I1204 23:56:18.097970   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098244   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.098281   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098435   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:56:18.098456   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:56:18.098466   36539 client.go:171] duration metric: took 23.320611985s to LocalClient.Create
	I1204 23:56:18.098496   36539 start.go:167] duration metric: took 23.320681504s to libmachine.API.Create "ha-168758"
	I1204 23:56:18.098512   36539 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1204 23:56:18.098543   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:56:18.098564   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.098810   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:56:18.098838   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.100934   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101269   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.101300   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101384   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.101557   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.101728   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.101868   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.187794   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:56:18.191682   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:56:18.191702   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:56:18.191762   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:56:18.191831   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:56:18.191844   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:56:18.191928   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:56:18.200599   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:18.221903   36539 start.go:296] duration metric: took 123.376466ms for postStartSetup
	I1204 23:56:18.221953   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:18.222479   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.225322   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225603   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.225629   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225822   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:18.226039   36539 start.go:128] duration metric: took 23.465963847s to createHost
	I1204 23:56:18.226063   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.227948   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228242   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.228279   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228403   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.228611   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228757   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228867   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.229008   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:18.229167   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:18.229181   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:56:18.342221   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356578.318371600
	
	I1204 23:56:18.342254   36539 fix.go:216] guest clock: 1733356578.318371600
	I1204 23:56:18.342262   36539 fix.go:229] Guest: 2024-12-04 23:56:18.3183716 +0000 UTC Remote: 2024-12-04 23:56:18.226051425 +0000 UTC m=+23.570737200 (delta=92.320175ms)
	I1204 23:56:18.342279   36539 fix.go:200] guest clock delta is within tolerance: 92.320175ms
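
fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host's wall clock, and only intervenes if the skew exceeds a tolerance; here the 92ms delta is accepted. A small sketch of that comparison (the tolerance constant below is a placeholder, not the value minikube actually uses):

// clocksketch.go - illustrative only.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockTolerance is a placeholder threshold, not minikube's actual value.
const clockTolerance = 2 * time.Second

// guestClockDelta parses `date +%s.%N` output from the guest and returns how
// far the guest clock is ahead of (positive) or behind (negative) hostNow.
func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	delta, err := guestClockDelta("1733356578.318371600", time.Now())
	if err != nil {
		panic(err)
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, abs <= clockTolerance)
}
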
	I1204 23:56:18.342284   36539 start.go:83] releasing machines lock for "ha-168758", held for 23.582292191s
	I1204 23:56:18.342302   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.342572   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.345220   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345537   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.345566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345727   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346174   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346341   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346431   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:56:18.346466   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.346494   36539 ssh_runner.go:195] Run: cat /version.json
	I1204 23:56:18.346512   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.348977   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349188   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349347   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349382   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349542   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349563   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349567   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349682   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349723   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349810   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.349858   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349927   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.349985   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.350095   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
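The two sshutil clients above let the "curl registry.k8s.io" probe and the "cat /version.json" check run in parallel against the VM at 192.168.39.246:22 as user docker with the profile's id_rsa key. A self-contained sketch of opening such a client and running one command with golang.org/x/crypto/ssh; the host, user and key path are copied from the log, everything else is illustrative rather than minikube's own ssh_runner code.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.246:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("cat /version.json")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}
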
	I1204 23:56:18.458516   36539 ssh_runner.go:195] Run: systemctl --version
	I1204 23:56:18.464121   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:56:18.616560   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:56:18.621973   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:56:18.622057   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:56:18.637008   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:56:18.637037   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:56:18.637115   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:56:18.652474   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:56:18.665368   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:56:18.665440   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:56:18.678285   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:56:18.691433   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:56:18.809520   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:56:18.976690   36539 docker.go:233] disabling docker service ...
	I1204 23:56:18.976760   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:56:18.990927   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:56:19.002936   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:56:19.109515   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:56:19.215051   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:56:19.228643   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:56:19.245611   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:56:19.245668   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.255147   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:56:19.255206   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.264826   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.274125   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.283553   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:56:19.293134   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.302638   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.318465   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.327811   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:56:19.336218   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:56:19.336258   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:56:19.347830   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
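The netfilter probe above is deliberately tolerant: when "sysctl net.bridge.bridge-nf-call-iptables" fails because the bridge module is not loaded yet, minikube simply loads br_netfilter and moves on, then makes sure IPv4 forwarding is enabled. A sketch of that fallback sequence using os/exec; the commands run locally here for brevity, whereas in the real flow each one goes through ssh_runner on the VM.

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and returns only its error; output is ignored here.
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		// Probe the bridge-netfilter sysctl; failure is expected when the
		// br_netfilter module has not been loaded yet.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		// Enable IPv4 forwarding, as in the log line above.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}
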
	I1204 23:56:19.356252   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:19.465000   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:56:19.556969   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:56:19.557036   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:56:19.561493   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:56:19.561547   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:56:19.564952   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:56:19.603625   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
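start.go gives the runtime up to 60 seconds for the CRI-O socket to appear and for crictl to answer before declaring it ready. A minimal polling sketch of that wait, assuming a plain stat-based check with a fixed 500ms interval rather than minikube's real retry helper.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
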
	I1204 23:56:19.603715   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.633677   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.660728   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:56:19.661971   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:19.664503   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.664856   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:19.664875   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.665065   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:56:19.668903   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:19.681192   36539 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:56:19.681288   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:19.681341   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:19.712021   36539 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 23:56:19.712076   36539 ssh_runner.go:195] Run: which lz4
	I1204 23:56:19.715771   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 23:56:19.715868   36539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 23:56:19.719604   36539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 23:56:19.719630   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 23:56:20.924241   36539 crio.go:462] duration metric: took 1.208402192s to copy over tarball
	I1204 23:56:20.924314   36539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 23:56:22.898491   36539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.974144098s)
	I1204 23:56:22.898524   36539 crio.go:469] duration metric: took 1.974249374s to extract the tarball
	I1204 23:56:22.898533   36539 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 23:56:22.934474   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:22.976510   36539 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:56:22.976531   36539 cache_images.go:84] Images are preloaded, skipping loading
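The preload step above follows a check, copy, extract pattern: stat /preloaded.tar.lz4 on the VM, copy the roughly 392 MB tarball over from the local cache when it is missing, unpack it under /var with tar and lz4, then re-run "crictl images" to confirm the images are present. A condensed sketch of that flow; paths are copied from the log, but the commands run locally with cp standing in for the scp transfer.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const remoteTarball = "/preloaded.tar.lz4"
		const cachedTarball = "/home/jenkins/minikube-integration/20045-14126/.minikube/cache/" +
			"preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"

		// Only copy the tarball if it is not already on the machine.
		if _, err := os.Stat(remoteTarball); os.IsNotExist(err) {
			if err := exec.Command("cp", cachedTarball, remoteTarball).Run(); err != nil {
				log.Fatalf("copy preload tarball: %v", err)
			}
		}
		// Extract under /var so the images land in CRI-O's storage, then clean up.
		extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", remoteTarball)
		if err := extract.Run(); err != nil {
			log.Fatalf("extract preload tarball: %v", err)
		}
		if err := os.Remove(remoteTarball); err != nil {
			log.Printf("cleanup: %v", err)
		}
	}
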
	I1204 23:56:22.976539   36539 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1204 23:56:22.976632   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:56:22.976690   36539 ssh_runner.go:195] Run: crio config
	I1204 23:56:23.020705   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:23.020725   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:23.020733   36539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:56:23.020757   36539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:56:23.020887   36539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
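The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to sanity-check such a file before handing it to kubeadm is to decode each document and inspect its apiVersion and kind; a sketch using gopkg.in/yaml.v3, which is a library choice made here for illustration and not necessarily what minikube itself uses.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				log.Fatalf("invalid document: %v", err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
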
	
	I1204 23:56:23.020918   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:56:23.020968   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:56:23.036384   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:56:23.036487   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:56:23.036538   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:56:23.045918   36539 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:56:23.045983   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 23:56:23.054649   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 23:56:23.069725   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:56:23.085152   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 23:56:23.100420   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 23:56:23.115477   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:56:23.118939   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:23.130202   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:23.249727   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:56:23.265905   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1204 23:56:23.265923   36539 certs.go:194] generating shared ca certs ...
	I1204 23:56:23.265936   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.266102   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:56:23.266137   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:56:23.266147   36539 certs.go:256] generating profile certs ...
	I1204 23:56:23.266191   36539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:56:23.266210   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt with IP's: []
	I1204 23:56:23.454121   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt ...
	I1204 23:56:23.454147   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt: {Name:mk99d8a201f862390e96b084b532473bf8d8aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454316   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key ...
	I1204 23:56:23.454328   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key: {Name:mkfdc708535eba27b71f019b2247ebd8ebc65dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454403   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a
	I1204 23:56:23.454418   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I1204 23:56:23.571650   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a ...
	I1204 23:56:23.571677   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a: {Name:mk2304c3cee7d491c51de92409bc18bd5dd4ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571829   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a ...
	I1204 23:56:23.571841   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a: {Name:mkd75785818ce3fa6f4d0786b8ae41ba1de41627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571908   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:56:23.572007   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:56:23.572063   36539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:56:23.572077   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt with IP's: []
	I1204 23:56:23.673268   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt ...
	I1204 23:56:23.673293   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt: {Name:mkd46a8ffbcb119ca858529996fc89416032f07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673434   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key ...
	I1204 23:56:23.673444   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key: {Name:mk7fc42a81c367eaf7af5377c2050677c25e0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673507   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:56:23.673523   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:56:23.673534   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:56:23.673547   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:56:23.673561   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:56:23.673574   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:56:23.673586   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:56:23.673598   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
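The certs.go and crypto.go lines above mint the profile's apiserver serving certificate, signed by the minikube CA and carrying SANs for the service IP (10.96.0.1), localhost, the node IP and the HA virtual IP (192.168.39.254). A self-contained sketch of issuing such a SAN-bearing certificate from a CA with crypto/x509; the key size, validity periods and subject names are illustrative assumptions, only the SAN IP list is taken from the log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key and certificate (stands in for .minikube/ca.key and ca.crt).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Apiserver leaf certificate with the SANs seen in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
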
	I1204 23:56:23.673642   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:56:23.673674   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:56:23.673683   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:56:23.673707   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:56:23.673729   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:56:23.673751   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:56:23.673787   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:23.673812   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.673825   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.673837   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:23.674359   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:56:23.698874   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:56:23.720936   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:56:23.742507   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:56:23.763840   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 23:56:23.787615   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:56:23.809303   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:56:23.830746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:56:23.852712   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:56:23.874053   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:56:23.895302   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:56:23.916675   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:56:23.931879   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:56:23.937416   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:56:23.947727   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951899   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.957401   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:56:23.967356   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:56:23.977327   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981332   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981387   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.986550   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:56:23.996547   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:56:24.006567   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010520   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010574   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.015659   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
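Each CA bundle copied above is made discoverable by OpenSSL by symlinking it under /etc/ssl/certs as "<subject-hash>.0", where the hash comes from "openssl x509 -hash -noout -in <cert>" (51391683, 3ec20f2e and b5213941 in this run). A sketch of that step driven from Go; it shells out to the real openssl binary rather than reimplementing the subject hash, and it would need the same root privileges the logged commands use.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// linkForOpenSSL symlinks certPath into /etc/ssl/certs under its subject hash,
	// mirroring "ln -fs <cert> /etc/ssl/certs/<hash>.0" from the log above.
	func linkForOpenSSL(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace an existing link, like ln -f
		return os.Symlink(certPath, link)
	}

	func main() {
		for _, cert := range []string{
			"/usr/share/ca-certificates/21385.pem",
			"/usr/share/ca-certificates/213852.pem",
			"/usr/share/ca-certificates/minikubeCA.pem",
		} {
			if err := linkForOpenSSL(cert); err != nil {
				log.Fatal(err)
			}
		}
	}
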
	I1204 23:56:24.028404   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:56:24.032403   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:56:24.032453   36539 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:56:24.032536   36539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:56:24.032593   36539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:56:24.081213   36539 cri.go:89] found id: ""
	I1204 23:56:24.081287   36539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:56:24.095063   36539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:56:24.104106   36539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:56:24.113190   36539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:56:24.113206   36539 kubeadm.go:157] found existing configuration files:
	
	I1204 23:56:24.113242   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:56:24.121657   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:56:24.121703   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:56:24.130638   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:56:24.139019   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:56:24.139075   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:56:24.147703   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.156160   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:56:24.156205   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.165072   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:56:24.173473   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:56:24.173525   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
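The four grep/rm pairs above are the stale-config cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is removed unless it already points at https://control-plane.minikube.internal:8443. On a first start none of the files exist, so every grep exits with status 2 and the rm is effectively a no-op. A compact sketch of that loop; it reads the files directly instead of shelling out to grep.

	package main

	import (
		"bytes"
		"log"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err == nil && bytes.Contains(data, endpoint) {
				continue // config already targets the expected endpoint, keep it
			}
			// Missing or stale: remove it so kubeadm regenerates a fresh copy.
			if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
				log.Fatalf("remove %s: %v", conf, err)
			}
		}
	}
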
	I1204 23:56:24.182301   36539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 23:56:24.376697   36539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:56:34.579933   36539 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:56:34.580018   36539 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:56:34.580122   36539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:56:34.580249   36539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:56:34.580403   36539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:56:34.580494   36539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:56:34.582178   36539 out.go:235]   - Generating certificates and keys ...
	I1204 23:56:34.582281   36539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:56:34.582355   36539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:56:34.582417   36539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:56:34.582467   36539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:56:34.582518   36539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:56:34.582564   36539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:56:34.582636   36539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:56:34.582772   36539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582829   36539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:56:34.582931   36539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582990   36539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:56:34.583041   36539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:56:34.583090   36539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:56:34.583139   36539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:56:34.583188   36539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:56:34.583239   36539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:56:34.583290   36539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:56:34.583344   36539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:56:34.583394   36539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:56:34.583463   36539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:56:34.583529   36539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:56:34.585082   36539 out.go:235]   - Booting up control plane ...
	I1204 23:56:34.585162   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:56:34.585225   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:56:34.585285   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:56:34.585417   36539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:56:34.585527   36539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:56:34.585583   36539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:56:34.585716   36539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:56:34.585831   36539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:56:34.585888   36539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.206497ms
	I1204 23:56:34.585961   36539 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:56:34.586024   36539 kubeadm.go:310] [api-check] The API server is healthy after 5.989146948s
	I1204 23:56:34.586115   36539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:56:34.586292   36539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:56:34.586380   36539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:56:34.586600   36539 kubeadm.go:310] [mark-control-plane] Marking the node ha-168758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:56:34.586682   36539 kubeadm.go:310] [bootstrap-token] Using token: xmluo7.j7yx8f6chysik4ve
	I1204 23:56:34.588072   36539 out.go:235]   - Configuring RBAC rules ...
	I1204 23:56:34.588170   36539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:56:34.588259   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:56:34.588422   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:56:34.588567   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:56:34.588682   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:56:34.588762   36539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:56:34.588863   36539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:56:34.588922   36539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:56:34.588962   36539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:56:34.588968   36539 kubeadm.go:310] 
	I1204 23:56:34.589020   36539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:56:34.589026   36539 kubeadm.go:310] 
	I1204 23:56:34.589098   36539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:56:34.589104   36539 kubeadm.go:310] 
	I1204 23:56:34.589125   36539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:56:34.589179   36539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:56:34.589225   36539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:56:34.589231   36539 kubeadm.go:310] 
	I1204 23:56:34.589295   36539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:56:34.589308   36539 kubeadm.go:310] 
	I1204 23:56:34.589352   36539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:56:34.589359   36539 kubeadm.go:310] 
	I1204 23:56:34.589401   36539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:56:34.589475   36539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:56:34.589570   36539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:56:34.589580   36539 kubeadm.go:310] 
	I1204 23:56:34.589656   36539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:56:34.589727   36539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:56:34.589735   36539 kubeadm.go:310] 
	I1204 23:56:34.589810   36539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.589952   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c \
	I1204 23:56:34.589986   36539 kubeadm.go:310] 	--control-plane 
	I1204 23:56:34.589993   36539 kubeadm.go:310] 
	I1204 23:56:34.590071   36539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:56:34.590078   36539 kubeadm.go:310] 
	I1204 23:56:34.590151   36539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.590255   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c 
	I1204 23:56:34.590265   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:34.590270   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:34.591672   36539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:56:34.592995   36539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:56:34.598320   36539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:56:34.598340   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:56:34.616335   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 23:56:34.967246   36539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:56:34.967343   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:34.967389   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758 minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=true
	I1204 23:56:35.009341   36539 ops.go:34] apiserver oom_adj: -16
	I1204 23:56:35.191577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:35.691603   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.192181   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.691747   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.192577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.691663   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.837937   36539 kubeadm.go:1113] duration metric: took 2.870656691s to wait for elevateKubeSystemPrivileges
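The repeated "kubectl get sa default" calls above are a readiness poll: minikube creates the minikube-rbac clusterrolebinding and keeps checking until the default service account exists, and the whole phase is timed as elevateKubeSystemPrivileges (2.87s here). A sketch of the same poll-until-ready plus bind sequence using kubectl via os/exec; the kubectl path and flags are copied from the log, while the 2-minute timeout and 500ms interval are assumptions.

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func kubectl(args ...string) error {
		base := []string{"--kubeconfig=/var/lib/minikube/kubeconfig"}
		return exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", append(base, args...)...).Run()
	}

	func main() {
		// Grant the kube-system default service account cluster-admin, as in the log.
		if err := kubectl("create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
			log.Fatalf("create clusterrolebinding: %v", err)
		}
		// Wait for the default service account to be created by the controller manager.
		deadline := time.Now().Add(2 * time.Minute) // timeout is an assumed value
		for kubectl("get", "sa", "default") != nil {
			if time.Now().After(deadline) {
				log.Fatal("timed out waiting for default service account")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
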
	I1204 23:56:37.837999   36539 kubeadm.go:394] duration metric: took 13.80554404s to StartCluster
	I1204 23:56:37.838021   36539 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.838113   36539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.838945   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.839186   36539 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:37.839215   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:56:37.839197   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:56:37.839216   36539 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 23:56:37.839321   36539 addons.go:69] Setting storage-provisioner=true in profile "ha-168758"
	I1204 23:56:37.839348   36539 addons.go:234] Setting addon storage-provisioner=true in "ha-168758"
	I1204 23:56:37.839351   36539 addons.go:69] Setting default-storageclass=true in profile "ha-168758"
	I1204 23:56:37.839375   36539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-168758"
	I1204 23:56:37.839380   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:37.839384   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.839759   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839800   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.839824   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839863   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.854550   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1204 23:56:37.854839   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1204 23:56:37.854964   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855259   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855493   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855517   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855736   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855762   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855853   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.855980   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.856068   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.856674   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.856721   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.858200   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.858527   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 23:56:37.859020   36539 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 23:56:37.859271   36539 addons.go:234] Setting addon default-storageclass=true in "ha-168758"
	I1204 23:56:37.859308   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.859664   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.859703   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.872166   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1204 23:56:37.872824   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.873354   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.873383   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.873694   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.873898   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.873955   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I1204 23:56:37.874272   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.874700   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.874726   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.875097   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.875640   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.875644   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.875720   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.877494   36539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:56:37.878808   36539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:37.878822   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:56:37.878836   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.882048   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882510   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.882542   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882617   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.882791   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.882930   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.883065   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.890703   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1204 23:56:37.891111   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.891600   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.891624   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.891905   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.892126   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.893659   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.893845   36539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:37.893861   36539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:56:37.893876   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.896603   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897013   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.897035   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897191   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.897338   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.897461   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.897562   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
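The two sshutil.go:53 clients above are plain key-based SSH sessions opened to 192.168.39.246:22 so the addon manifests can be copied onto the node. A minimal sketch of an equivalent client, assuming the golang.org/x/crypto/ssh package (this is not minikube's own sshutil code; the key path and address are the ones logged above):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the sshutil.go:53 lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors the StrictHostKeyChecking=no option used elsewhere in this run
	}
	client, err := ssh.Dial("tcp", "192.168.39.246:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("sudo mkdir -p /etc/kubernetes/addons")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}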
	I1204 23:56:37.981170   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:56:38.100674   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:38.124529   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:38.478625   36539 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 23:56:38.478692   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.478710   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.478984   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479004   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479014   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.479022   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.479240   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479255   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479326   36539 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 23:56:38.479347   36539 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 23:56:38.479442   36539 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 23:56:38.479452   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.479462   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.479469   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.494926   36539 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1204 23:56:38.495499   36539 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 23:56:38.495513   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.495520   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.495529   36539 round_trippers.go:473]     Content-Type: application/json
	I1204 23:56:38.495532   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.504036   36539 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 23:56:38.504185   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.504196   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.504490   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.504510   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.504543   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.728533   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728559   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.728849   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.728876   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.728885   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728893   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.729129   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.729202   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.729219   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.730946   36539 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 23:56:38.732198   36539 addons.go:510] duration metric: took 892.982269ms for enable addons: enabled=[default-storageclass storage-provisioner]
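The enabled=[default-storageclass storage-provisioner] list reported here is just the subset of the toEnable map logged at addons.go:507 whose values are true. A small illustration of that reduction (enabledAddons is a hypothetical helper, not minikube's addons.go, and the map below is abbreviated to three of the ~40 entries):

package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the sorted names of the addons flagged true.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	toEnable := map[string]bool{
		"default-storageclass": true,
		"storage-provisioner":  true,
		"volcano":              false,
	}
	fmt.Println(enabledAddons(toEnable)) // [default-storageclass storage-provisioner]
}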
	I1204 23:56:38.732240   36539 start.go:246] waiting for cluster config update ...
	I1204 23:56:38.732263   36539 start.go:255] writing updated cluster config ...
	I1204 23:56:38.733842   36539 out.go:201] 
	I1204 23:56:38.735235   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:38.735344   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.736922   36539 out.go:177] * Starting "ha-168758-m02" control-plane node in "ha-168758" cluster
	I1204 23:56:38.738175   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:38.738201   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:56:38.738327   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:56:38.738348   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:56:38.738442   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.738692   36539 start.go:360] acquireMachinesLock for ha-168758-m02: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:56:38.738753   36539 start.go:364] duration metric: took 32.863µs to acquireMachinesLock for "ha-168758-m02"
	I1204 23:56:38.738778   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:38.738869   36539 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 23:56:38.740536   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:56:38.740652   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:38.740698   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:38.755586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I1204 23:56:38.756099   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:38.756624   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:38.756663   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:38.757003   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:38.757205   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:56:38.757343   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:56:38.757484   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:56:38.757510   36539 client.go:168] LocalClient.Create starting
	I1204 23:56:38.757536   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:56:38.757571   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757584   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757629   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:56:38.757661   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757671   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757684   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:56:38.757692   36539 main.go:141] libmachine: (ha-168758-m02) Calling .PreCreateCheck
	I1204 23:56:38.757862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:56:38.758275   36539 main.go:141] libmachine: Creating machine...
	I1204 23:56:38.758289   36539 main.go:141] libmachine: (ha-168758-m02) Calling .Create
	I1204 23:56:38.758443   36539 main.go:141] libmachine: (ha-168758-m02) Creating KVM machine...
	I1204 23:56:38.759771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing default KVM network
	I1204 23:56:38.759905   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing private KVM network mk-ha-168758
	I1204 23:56:38.760010   36539 main.go:141] libmachine: (ha-168758-m02) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:38.760033   36539 main.go:141] libmachine: (ha-168758-m02) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:56:38.760144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:38.760017   36906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:38.760207   36539 main.go:141] libmachine: (ha-168758-m02) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:56:39.002966   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.002856   36906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa...
	I1204 23:56:39.068744   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068644   36906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk...
	I1204 23:56:39.068771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing magic tar header
	I1204 23:56:39.068781   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing SSH key tar header
	I1204 23:56:39.068788   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068754   36906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:39.068880   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02
	I1204 23:56:39.068913   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 (perms=drwx------)
	I1204 23:56:39.068925   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:56:39.068946   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:39.068960   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:56:39.068973   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:56:39.068987   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:56:39.068997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:56:39.069012   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:56:39.069021   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home
	I1204 23:56:39.069037   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Skipping /home - not owner
	I1204 23:56:39.069073   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:56:39.069102   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:56:39.069123   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:56:39.069137   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:39.070045   36539 main.go:141] libmachine: (ha-168758-m02) define libvirt domain using xml: 
	I1204 23:56:39.070076   36539 main.go:141] libmachine: (ha-168758-m02) <domain type='kvm'>
	I1204 23:56:39.070100   36539 main.go:141] libmachine: (ha-168758-m02)   <name>ha-168758-m02</name>
	I1204 23:56:39.070118   36539 main.go:141] libmachine: (ha-168758-m02)   <memory unit='MiB'>2200</memory>
	I1204 23:56:39.070139   36539 main.go:141] libmachine: (ha-168758-m02)   <vcpu>2</vcpu>
	I1204 23:56:39.070157   36539 main.go:141] libmachine: (ha-168758-m02)   <features>
	I1204 23:56:39.070172   36539 main.go:141] libmachine: (ha-168758-m02)     <acpi/>
	I1204 23:56:39.070188   36539 main.go:141] libmachine: (ha-168758-m02)     <apic/>
	I1204 23:56:39.070199   36539 main.go:141] libmachine: (ha-168758-m02)     <pae/>
	I1204 23:56:39.070210   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070233   36539 main.go:141] libmachine: (ha-168758-m02)   </features>
	I1204 23:56:39.070244   36539 main.go:141] libmachine: (ha-168758-m02)   <cpu mode='host-passthrough'>
	I1204 23:56:39.070251   36539 main.go:141] libmachine: (ha-168758-m02)   
	I1204 23:56:39.070266   36539 main.go:141] libmachine: (ha-168758-m02)   </cpu>
	I1204 23:56:39.070275   36539 main.go:141] libmachine: (ha-168758-m02)   <os>
	I1204 23:56:39.070282   36539 main.go:141] libmachine: (ha-168758-m02)     <type>hvm</type>
	I1204 23:56:39.070293   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='cdrom'/>
	I1204 23:56:39.070303   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='hd'/>
	I1204 23:56:39.070312   36539 main.go:141] libmachine: (ha-168758-m02)     <bootmenu enable='no'/>
	I1204 23:56:39.070322   36539 main.go:141] libmachine: (ha-168758-m02)   </os>
	I1204 23:56:39.070334   36539 main.go:141] libmachine: (ha-168758-m02)   <devices>
	I1204 23:56:39.070343   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='cdrom'>
	I1204 23:56:39.070361   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/boot2docker.iso'/>
	I1204 23:56:39.070369   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hdc' bus='scsi'/>
	I1204 23:56:39.070376   36539 main.go:141] libmachine: (ha-168758-m02)       <readonly/>
	I1204 23:56:39.070387   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070396   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='disk'>
	I1204 23:56:39.070401   36539 main.go:141] libmachine: (ha-168758-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:56:39.070412   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk'/>
	I1204 23:56:39.070427   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hda' bus='virtio'/>
	I1204 23:56:39.070435   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070439   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070447   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='mk-ha-168758'/>
	I1204 23:56:39.070452   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070459   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070463   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070468   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='default'/>
	I1204 23:56:39.070475   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070479   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070483   36539 main.go:141] libmachine: (ha-168758-m02)     <serial type='pty'>
	I1204 23:56:39.070500   36539 main.go:141] libmachine: (ha-168758-m02)       <target port='0'/>
	I1204 23:56:39.070515   36539 main.go:141] libmachine: (ha-168758-m02)     </serial>
	I1204 23:56:39.070526   36539 main.go:141] libmachine: (ha-168758-m02)     <console type='pty'>
	I1204 23:56:39.070533   36539 main.go:141] libmachine: (ha-168758-m02)       <target type='serial' port='0'/>
	I1204 23:56:39.070539   36539 main.go:141] libmachine: (ha-168758-m02)     </console>
	I1204 23:56:39.070547   36539 main.go:141] libmachine: (ha-168758-m02)     <rng model='virtio'>
	I1204 23:56:39.070561   36539 main.go:141] libmachine: (ha-168758-m02)       <backend model='random'>/dev/random</backend>
	I1204 23:56:39.070568   36539 main.go:141] libmachine: (ha-168758-m02)     </rng>
	I1204 23:56:39.070579   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070586   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070604   36539 main.go:141] libmachine: (ha-168758-m02)   </devices>
	I1204 23:56:39.070621   36539 main.go:141] libmachine: (ha-168758-m02) </domain>
	I1204 23:56:39.070635   36539 main.go:141] libmachine: (ha-168758-m02) 
	I1204 23:56:39.077113   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:a2:74:ea in network default
	I1204 23:56:39.077678   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring networks are active...
	I1204 23:56:39.077701   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:39.078445   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network default is active
	I1204 23:56:39.078865   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network mk-ha-168758 is active
	I1204 23:56:39.079237   36539 main.go:141] libmachine: (ha-168758-m02) Getting domain xml...
	I1204 23:56:39.079930   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
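The "define libvirt domain using xml" and "Creating domain..." steps above amount to defining and starting a KVM guest from the <domain> document just printed. A minimal sketch of those two calls, assuming the libvirt.org/go/libvirt bindings and the XML saved to a local file (minikube's kvm2 driver does this internally rather than via a standalone program):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// The <domain> XML printed in the log above, saved to a file for this sketch.
	xml, err := os.ReadFile("ha-168758-m02.xml")
	if err != nil {
		log.Fatal(err)
	}

	// KVMQemuURI from the cluster config logged earlier.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // after this, the code polls DHCP leases for the guest's IP
		log.Fatal(err)
	}
}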
	I1204 23:56:40.265017   36539 main.go:141] libmachine: (ha-168758-m02) Waiting to get IP...
	I1204 23:56:40.265754   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.266145   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.266182   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.266132   36906 retry.go:31] will retry after 272.630461ms: waiting for machine to come up
	I1204 23:56:40.540436   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.540778   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.540807   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.540731   36906 retry.go:31] will retry after 280.679773ms: waiting for machine to come up
	I1204 23:56:40.823193   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.823606   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.823628   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.823550   36906 retry.go:31] will retry after 428.494355ms: waiting for machine to come up
	I1204 23:56:41.253121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.253634   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.253657   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.253584   36906 retry.go:31] will retry after 443.923324ms: waiting for machine to come up
	I1204 23:56:41.698819   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.699183   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.699213   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.699129   36906 retry.go:31] will retry after 522.039893ms: waiting for machine to come up
	I1204 23:56:42.222513   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:42.223045   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:42.223075   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:42.222991   36906 retry.go:31] will retry after 778.146913ms: waiting for machine to come up
	I1204 23:56:43.002368   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.002725   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.002748   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.002697   36906 retry.go:31] will retry after 985.520994ms: waiting for machine to come up
	I1204 23:56:43.990537   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.991046   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.991072   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.991013   36906 retry.go:31] will retry after 1.065554597s: waiting for machine to come up
	I1204 23:56:45.057549   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:45.057997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:45.058038   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:45.057954   36906 retry.go:31] will retry after 1.75203222s: waiting for machine to come up
	I1204 23:56:46.812766   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:46.813189   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:46.813218   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:46.813128   36906 retry.go:31] will retry after 1.936174375s: waiting for machine to come up
	I1204 23:56:48.752161   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:48.752595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:48.752615   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:48.752561   36906 retry.go:31] will retry after 2.006049391s: waiting for machine to come up
	I1204 23:56:50.760948   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:50.761423   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:50.761448   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:50.761364   36906 retry.go:31] will retry after 3.635507158s: waiting for machine to come up
	I1204 23:56:54.398466   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:54.398888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:54.398919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:54.398858   36906 retry.go:31] will retry after 3.280789577s: waiting for machine to come up
	I1204 23:56:57.683343   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:57.683888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:57.683924   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:57.683859   36906 retry.go:31] will retry after 5.640187241s: waiting for machine to come up
	I1204 23:57:03.329060   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329506   36539 main.go:141] libmachine: (ha-168758-m02) Found IP for machine: 192.168.39.70
	I1204 23:57:03.329530   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329536   36539 main.go:141] libmachine: (ha-168758-m02) Reserving static IP address...
	I1204 23:57:03.329911   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find host DHCP lease matching {name: "ha-168758-m02", mac: "52:54:00:e2:30:30", ip: "192.168.39.70"} in network mk-ha-168758
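The run of "will retry after ...: waiting for machine to come up" lines from 23:56:40 to 23:57:03 is a poll-with-growing-backoff loop around the libvirt DHCP lease query, ending when the lease for 52:54:00:e2:30:30 finally appears. A minimal sketch of that pattern (lookupIP is a stand-in for the lease query, not minikube's retry.go API):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it reports an address, sleeping a little
// longer (with jitter) after each miss, much like the retry.go:31 lines above.
func waitForIP(lookupIP func() (string, bool)) string {
	backoff := 250 * time.Millisecond
	for {
		if ip, ok := lookupIP(); ok {
			return ip
		}
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if backoff < 4*time.Second {
			backoff *= 2 // grow the wait, but cap it so the poll stays responsive
		}
	}
}

func main() {
	attempts := 0
	ip := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 4 { // pretend the lease shows up on the fourth poll
			return "", false
		}
		return "192.168.39.70", true
	})
	fmt.Println("Found IP for machine:", ip)
}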
	I1204 23:57:03.403859   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Getting to WaitForSSH function...
	I1204 23:57:03.403880   36539 main.go:141] libmachine: (ha-168758-m02) Reserved static IP address: 192.168.39.70
	I1204 23:57:03.403891   36539 main.go:141] libmachine: (ha-168758-m02) Waiting for SSH to be available...
	I1204 23:57:03.406504   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407026   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.407061   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407259   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH client type: external
	I1204 23:57:03.407286   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa (-rw-------)
	I1204 23:57:03.407378   36539 main.go:141] libmachine: (ha-168758-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:57:03.407404   36539 main.go:141] libmachine: (ha-168758-m02) DBG | About to run SSH command:
	I1204 23:57:03.407413   36539 main.go:141] libmachine: (ha-168758-m02) DBG | exit 0
	I1204 23:57:03.534727   36539 main.go:141] libmachine: (ha-168758-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 23:57:03.535017   36539 main.go:141] libmachine: (ha-168758-m02) KVM machine creation complete!
	I1204 23:57:03.535420   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:03.536018   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536208   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536363   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:57:03.536382   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetState
	I1204 23:57:03.537483   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:57:03.537500   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:57:03.537508   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:57:03.537516   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.539751   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540081   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.540109   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540281   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.540440   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540587   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540727   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.540880   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.541129   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.541145   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:57:03.649219   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:03.649249   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:57:03.649261   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.652199   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.652856   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652885   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.653079   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653247   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653419   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.653613   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.653828   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.653842   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:57:03.763190   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:57:03.763346   36539 main.go:141] libmachine: found compatible host: buildroot
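Provisioner detection above is just "cat /etc/os-release" run over the new SSH session, with the NAME/ID fields matched against the known Buildroot guest. A minimal sketch of that parse (reading a local file for simplicity; the field names come from the output just shown, and this is not minikube's detection code):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Collect KEY=VALUE pairs, stripping surrounding quotes.
	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot, version", fields["VERSION_ID"])
	}
}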
	I1204 23:57:03.763361   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:57:03.763371   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763602   36539 buildroot.go:166] provisioning hostname "ha-168758-m02"
	I1204 23:57:03.763632   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.766973   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767390   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.767437   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767603   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.767818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768055   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768271   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.768440   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.768709   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.768726   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m02 && echo "ha-168758-m02" | sudo tee /etc/hostname
	I1204 23:57:03.892672   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m02
	
	I1204 23:57:03.892702   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.895144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895422   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.895442   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895624   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.895781   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.895901   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.896009   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.896154   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.896362   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.896378   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:57:04.014565   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:04.014591   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:57:04.014605   36539 buildroot.go:174] setting up certificates
	I1204 23:57:04.014613   36539 provision.go:84] configureAuth start
	I1204 23:57:04.014620   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:04.014862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.017595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.017892   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.017919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.018050   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.020121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020411   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.020432   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020630   36539 provision.go:143] copyHostCerts
	I1204 23:57:04.020662   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020697   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:57:04.020725   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020805   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:57:04.020931   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.020973   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:57:04.020984   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.021027   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:57:04.021089   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021109   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:57:04.021118   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021154   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:57:04.021216   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m02 san=[127.0.0.1 192.168.39.70 ha-168758-m02 localhost minikube]
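The server certificate generated here is signed by the profile CA and carries the SANs listed in the san=[...] field above. A minimal sketch of issuing such a certificate with crypto/x509 (the ca.pem/ca-key.pem paths are placeholders, the CA key is assumed to be an RSA PKCS#1 PEM, and the 26280h lifetime is the CertExpiration value from the cluster config; this is not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair, as referenced in the provision.go:117 line above.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-168758-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the san=[...] field logged above.
		DNSNames:    []string{"ha-168758-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}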
	I1204 23:57:04.229700   36539 provision.go:177] copyRemoteCerts
	I1204 23:57:04.229780   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:57:04.229810   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.233253   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233650   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.233681   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233956   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.234161   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.234318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.234434   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.316064   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:57:04.316143   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:57:04.338512   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:57:04.338585   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:57:04.360119   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:57:04.360178   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:57:04.381996   36539 provision.go:87] duration metric: took 367.37237ms to configureAuth
	I1204 23:57:04.382022   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:57:04.382234   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:04.382318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.384736   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385043   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.385085   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385263   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.385456   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385634   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385765   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.385893   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.386085   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.386102   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:57:04.615454   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:57:04.615480   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:57:04.615488   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetURL
	I1204 23:57:04.616607   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using libvirt version 6000000
	I1204 23:57:04.618773   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619138   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.619174   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619325   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:57:04.619342   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:57:04.619349   36539 client.go:171] duration metric: took 25.861833034s to LocalClient.Create
	I1204 23:57:04.619371   36539 start.go:167] duration metric: took 25.86188766s to libmachine.API.Create "ha-168758"
	I1204 23:57:04.619385   36539 start.go:293] postStartSetup for "ha-168758-m02" (driver="kvm2")
	I1204 23:57:04.619400   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:57:04.619421   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.619646   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:57:04.619675   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.622076   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622449   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.622480   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622622   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.622800   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.622964   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.623116   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.712567   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:57:04.716652   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:57:04.716681   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:57:04.716761   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:57:04.716858   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:57:04.716869   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:57:04.716975   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:57:04.726044   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:04.748709   36539 start.go:296] duration metric: took 129.292597ms for postStartSetup
	I1204 23:57:04.748769   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:04.749436   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.751961   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752346   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.752376   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752616   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:04.752793   36539 start.go:128] duration metric: took 26.013914076s to createHost
	I1204 23:57:04.752815   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.755319   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755652   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.755682   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755827   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.755971   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756086   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756189   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.756307   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.756479   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.756492   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:57:04.866426   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356624.824376765
	
	I1204 23:57:04.866448   36539 fix.go:216] guest clock: 1733356624.824376765
	I1204 23:57:04.866456   36539 fix.go:229] Guest: 2024-12-04 23:57:04.824376765 +0000 UTC Remote: 2024-12-04 23:57:04.752806033 +0000 UTC m=+70.097491807 (delta=71.570732ms)
	I1204 23:57:04.866470   36539 fix.go:200] guest clock delta is within tolerance: 71.570732ms
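
The fix.go lines above compare the guest VM clock against the host clock and accept the node when the delta is within a tolerance (71.57ms here). A minimal illustrative sketch of that comparison is below; the 2s threshold is an assumption for the example, not minikube's configured value.

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock; the tolerance passed by the caller is illustrative only.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tol
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(71570732 * time.Nanosecond) // ~71.57ms, the delta seen in the log
    	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
    }
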
	I1204 23:57:04.866475   36539 start.go:83] releasing machines lock for "ha-168758-m02", held for 26.12770888s
	I1204 23:57:04.866493   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.866780   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.869289   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.869690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.869723   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.871575   36539 out.go:177] * Found network options:
	I1204 23:57:04.872861   36539 out.go:177]   - NO_PROXY=192.168.39.246
	W1204 23:57:04.874174   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.874219   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.874854   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875047   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875134   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:57:04.875168   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	W1204 23:57:04.875274   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.875348   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:57:04.875364   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.877779   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878165   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878255   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878300   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878462   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878636   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.878704   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878726   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878789   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.878869   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878940   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.878980   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.879169   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.879295   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:05.115514   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:57:05.121578   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:57:05.121649   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:57:05.137141   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:57:05.137166   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:57:05.137249   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:57:05.153397   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:57:05.167629   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:57:05.167697   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:57:05.180991   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:57:05.194817   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:57:05.319862   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:57:05.480267   36539 docker.go:233] disabling docker service ...
	I1204 23:57:05.480352   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:57:05.494063   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:57:05.506104   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:57:05.622700   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:57:05.745424   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:57:05.759393   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:57:05.776697   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:57:05.776762   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.787070   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:57:05.787137   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.797124   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.807348   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.816991   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:57:05.826982   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.836497   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.852423   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
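
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the default_sysctls block. As a hypothetical Go equivalent of one of those edits (using a regexp instead of sed; the path and values come straight from the log, but this is not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
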
	I1204 23:57:05.862453   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:57:05.871289   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:57:05.871338   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:57:05.883727   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
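
The failed sysctl above is tolerated because the bridge netfilter module simply is not loaded yet; the next two commands load br_netfilter and enable IPv4 forwarding. A rough sketch of that check-then-modprobe fallback is shown below; it assumes modprobe is on PATH and root privileges, and it mirrors the commands in the log rather than minikube's actual code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	// If the sysctl file exists, br_netfilter is already loaded.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
    		return nil
    	}
    	// Otherwise load the kernel module, as the log does with modprobe.
    	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
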
	I1204 23:57:05.893241   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:06.013483   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:57:06.107092   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:57:06.107171   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:57:06.112530   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:57:06.112612   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:57:06.116309   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:57:06.157302   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:57:06.157371   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.184889   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.212833   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:57:06.214532   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:57:06.215684   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:06.218318   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:06.218715   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218889   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:57:06.222805   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
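
The bash one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal entry and appending the current mapping. A stand-alone Go version of that filter-and-append step might look like this; the file path, IP and hostname are the ones in the log, and error handling is deliberately minimal.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends "ip\tname".
    func upsertHost(contents, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(contents, "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Print(upsertHost(string(data), "192.168.39.1", "host.minikube.internal"))
    }
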
	I1204 23:57:06.234260   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:57:06.234431   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:06.234663   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.234694   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.249086   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1204 23:57:06.249474   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.249897   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.249916   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.250208   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.250399   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:57:06.251961   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:06.252244   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.252279   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.266555   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1204 23:57:06.267012   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.267500   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.267519   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.267807   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.267965   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:06.268109   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.70
	I1204 23:57:06.268124   36539 certs.go:194] generating shared ca certs ...
	I1204 23:57:06.268142   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.268293   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:57:06.268346   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:57:06.268359   36539 certs.go:256] generating profile certs ...
	I1204 23:57:06.268451   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:57:06.268485   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d
	I1204 23:57:06.268506   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.254]
	I1204 23:57:06.433427   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d ...
	I1204 23:57:06.433455   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d: {Name:mk9b5ab41b84365c7c87c13c8ccd0d5c3db76aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433623   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d ...
	I1204 23:57:06.433643   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d: {Name:mk61c76c74eaed61a46e313e778d5b43d1ba813c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433739   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:57:06.433896   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:57:06.434089   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
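
The apiserver certificate generated above carries the cluster service IP, localhost, both node IPs and the HA VIP as subject alternative names. As a minimal illustration of how such a certificate can be produced with crypto/x509 (self-signed here for brevity; minikube signs with its cluster CA, and this is not its certs.go/crypto.go code), the IP SANs below are the ones listed in the log:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs matching the list in the log above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.70"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
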
	I1204 23:57:06.434105   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:57:06.434120   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:57:06.434140   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:57:06.434158   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:57:06.434176   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:57:06.434192   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:57:06.434208   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:57:06.434229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:57:06.434295   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:57:06.434333   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:57:06.434347   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:57:06.434398   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:57:06.434429   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:57:06.434462   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:57:06.434512   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:06.434550   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:57:06.434571   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.434596   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:57:06.434632   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:06.437677   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438091   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:06.438121   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438323   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:06.438484   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:06.438627   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:06.438715   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:06.518370   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:57:06.523710   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:57:06.534968   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:57:06.539161   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:57:06.549270   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:57:06.553348   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:57:06.562722   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:57:06.566572   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:57:06.575817   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:57:06.579621   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:57:06.588791   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:57:06.592406   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:57:06.601971   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:57:06.626102   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:57:06.647746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:57:06.670354   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:57:06.700509   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 23:57:06.722552   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:57:06.745113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:57:06.767304   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:57:06.789718   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:57:06.812073   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:57:06.834754   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:57:06.857779   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:57:06.872839   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:57:06.887702   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:57:06.903420   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:57:06.918686   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:57:06.933715   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:57:06.948744   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:57:06.963779   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:57:06.969267   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:57:06.978970   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.982989   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.983047   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.988382   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:57:06.997926   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:57:07.007572   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011739   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011796   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.017095   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:57:07.027066   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:57:07.037055   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041161   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041220   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.046426   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:57:07.056376   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:57:07.060388   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:57:07.060436   36539 kubeadm.go:934] updating node {m02 192.168.39.70 8443 v1.31.2 crio true true} ...
	I1204 23:57:07.060522   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:57:07.060547   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:57:07.060575   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:57:07.074631   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:57:07.074712   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
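
kube-vip.go renders the static-pod manifest above from the cluster's VIP and API server port. A hypothetical text/template sketch of that kind of rendering is shown below, covering only a two-variable fragment of the env section; the address and port values are the ones from the log, and the template itself is illustrative rather than minikube's.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down fragment of the kube-vip env section; the full manifest is above.
    const kubeVipTmpl = `    - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	// HA VIP 192.168.39.254 and API server port 8443, as in the log.
    	if err := t.Execute(os.Stdout, struct {
    		VIP  string
    		Port int
    	}{"192.168.39.254", 8443}); err != nil {
    		panic(err)
    	}
    }
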
	I1204 23:57:07.074777   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.084179   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:57:07.084224   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.093203   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:57:07.093229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093277   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 23:57:07.093292   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093312   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 23:57:07.097086   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:57:07.097113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:57:08.011623   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.011699   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.017164   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:57:08.017200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:57:08.175961   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:08.200946   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.201051   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.211652   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:57:08.211693   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
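
The kubectl/kubeadm/kubelet downloads above carry a checksum=file:...sha256 parameter, i.e. each release binary is verified against its published SHA-256 before being copied into /var/lib/minikube/binaries. A minimal sketch of that download-and-verify step is below; the URLs are the ones in the log, but the code is an illustration, not minikube's download.go.

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	want := strings.Fields(string(sum))[0]
    	h := sha256.Sum256(bin)
    	if got := hex.EncodeToString(h[:]); got != want {
    		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
    		os.Exit(1)
    	}
    	fmt.Println("kubeadm checksum OK,", len(bin), "bytes")
    }
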
	I1204 23:57:08.563181   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:57:08.571736   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 23:57:08.586849   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:57:08.601801   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:57:08.616859   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:57:08.620378   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:57:08.631234   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:08.746289   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:08.762474   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:08.762951   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:08.763010   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:08.777798   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1204 23:57:08.778308   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:08.778784   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:08.778801   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:08.779085   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:08.779279   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:08.779432   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:57:08.779528   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:57:08.779552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:08.782464   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.782947   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:08.782974   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.783150   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:08.783306   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:08.783439   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:08.783593   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:08.959692   36539 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:08.959734   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I1204 23:57:30.700354   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (21.740590437s)
	I1204 23:57:30.700399   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:57:31.260769   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m02 minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:57:31.388605   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:57:31.507247   36539 start.go:319] duration metric: took 22.727809032s to joinCluster
	I1204 23:57:31.507330   36539 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:31.507657   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:31.510077   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:57:31.511382   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:31.807145   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:31.866005   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:57:31.866366   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:57:31.866446   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:57:31.866697   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:31.866802   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:31.866815   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:31.866826   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:31.866832   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:31.894115   36539 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1204 23:57:32.367519   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.367541   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.367548   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.367552   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.372462   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:32.867267   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.867297   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.867307   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.867312   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.870593   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.367165   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.367188   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.367199   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.367204   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.371115   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.867446   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.867467   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.867490   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.867496   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.895642   36539 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1204 23:57:33.896043   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
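
The round_trippers lines above are the node-readiness poll: node_ready.go GETs /api/v1/nodes/ha-168758-m02 roughly every half second until the node's Ready condition turns True or the 6m budget expires. A rough standard-library sketch of that loop is below; it assumes an already-trusted and authenticated HTTP client, which is not how minikube actually reaches the API server (it builds a client from the kubeconfig and client certificates shown earlier).

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    // Just enough of the Node object to read status.conditions.
    type node struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func isReady(c *http.Client, url string) (bool, error) {
    	resp, err := c.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	var n node
    	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
    		return false, err
    	}
    	for _, cond := range n.Status.Conditions {
    		if cond.Type == "Ready" {
    			return cond.Status == "True", nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	url := "https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02"
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ok, err := isReady(http.DefaultClient, url); err == nil && ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for Ready")
    }
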
	I1204 23:57:34.367204   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.367227   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.367236   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.367241   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.370493   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:34.867119   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.867142   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.867150   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.867153   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.871875   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:35.367013   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.367043   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.367054   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.367059   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.369963   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:35.867691   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.867711   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.867718   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.867722   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.873237   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:57:36.366914   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.366938   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.366946   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.366957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.369646   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:36.370291   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:36.867182   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.867206   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.867216   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.867221   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.870020   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:37.367042   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.367063   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.367071   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.367074   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.370425   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:37.867722   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.867746   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.867757   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.867762   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.870832   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.367880   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.367907   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.367919   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.367925   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.371699   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.374216   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:38.866862   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.866882   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.866889   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.866894   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.870442   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.367314   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.367335   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.367343   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.367347   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.370397   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.867083   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.867112   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.867124   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.867131   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.869764   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.367183   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.367204   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.367212   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.367216   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.370122   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.866955   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.866977   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.866985   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.866989   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.869893   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.870550   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:41.367838   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.367867   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.367877   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.367883   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.371252   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:41.867051   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.867074   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.867081   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.867086   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.869735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.367874   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.367896   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.367904   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.367908   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.370905   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.867902   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.867923   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.867930   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.867936   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.871077   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:42.871630   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:43.367020   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.367041   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.367049   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.367053   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.369800   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:43.867917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.867935   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.867943   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.867948   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.871058   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:44.366921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.366944   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.366952   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.366956   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.369844   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:44.867849   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.867871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.867878   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.867882   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.871084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.367038   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.367064   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.367072   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.367076   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.370136   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.370824   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:45.867201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.867222   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.867230   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.867234   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.870440   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.367088   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.367111   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.367122   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.367128   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.370360   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.867494   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.867515   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.867523   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.867527   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.870477   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:47.367351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.367377   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.367388   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.367395   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.370421   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:47.371066   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:47.867351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.867371   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.867376   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.867380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.870561   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:48.367177   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.367201   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.367208   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.367212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.370109   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:48.867913   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.867940   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.867951   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.867957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.870839   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:49.367731   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.367752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.367763   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.367775   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.371251   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:49.371965   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:49.867061   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.867082   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.867091   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.867097   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.870692   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.367727   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.367748   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.367756   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.367760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.371359   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.372130   36539 node_ready.go:49] node "ha-168758-m02" has status "Ready":"True"
	I1204 23:57:50.372146   36539 node_ready.go:38] duration metric: took 18.505433084s for node "ha-168758-m02" to be "Ready" ...
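	The loop above is simply the client re-fetching the node object every ~500ms until its Ready condition flips to True. Below is a minimal client-go sketch of that pattern, not minikube's actual node_ready implementation; the kubeconfig path, poll interval, and timeout are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its NodeReady condition is True.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as "not ready yet" and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-168758-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}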
	I1204 23:57:50.372154   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:57:50.372222   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:50.372232   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.372239   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.372245   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.376642   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:50.381842   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.381907   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:57:50.381916   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.381922   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.381927   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.384768   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.385355   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.385369   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.385375   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.385379   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.387687   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.388085   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.388100   36539 pod_ready.go:82] duration metric: took 6.236474ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388109   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388153   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:57:50.388161   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.388167   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.388170   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.390871   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.391380   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.391394   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.391400   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.391405   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.394126   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.394638   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.394652   36539 pod_ready.go:82] duration metric: took 6.538378ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394659   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394701   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:57:50.394709   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.394715   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.394720   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.397059   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.397459   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.397471   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.397478   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.397482   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.399735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.400125   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.400139   36539 pod_ready.go:82] duration metric: took 5.474765ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400148   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400188   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:57:50.400196   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.400202   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.400206   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402193   36539 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 23:57:50.402740   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.402752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.402760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402765   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.404874   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.405354   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.405371   36539 pod_ready.go:82] duration metric: took 5.218099ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.405382   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.568781   36539 request.go:632] Waited for 163.320423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568837   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568845   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.568853   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.568860   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.572195   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.768439   36539 request.go:632] Waited for 195.374469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768509   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768529   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.768537   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.768542   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.771909   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.772411   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.772428   36539 pod_ready.go:82] duration metric: took 367.03986ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.772438   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.968760   36539 request.go:632] Waited for 196.264628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968844   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968862   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.968875   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.968883   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.972428   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.168517   36539 request.go:632] Waited for 195.379762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168570   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.168582   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.168586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.171966   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.172839   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.172864   36539 pod_ready.go:82] duration metric: took 400.419668ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.172878   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.368601   36539 request.go:632] Waited for 195.644185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368673   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368681   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.368689   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.368696   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.371786   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.568796   36539 request.go:632] Waited for 196.387928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568847   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568853   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.568859   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.568862   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.572174   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.572929   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.572951   36539 pod_ready.go:82] duration metric: took 400.064616ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.572965   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.767794   36539 request.go:632] Waited for 194.737211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767865   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.767885   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.767892   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.771726   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.967706   36539 request.go:632] Waited for 195.275144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967798   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967806   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.967816   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.967825   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.971641   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.972143   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.972162   36539 pod_ready.go:82] duration metric: took 399.188822ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.972176   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.168279   36539 request.go:632] Waited for 196.037931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168357   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168364   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.168371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.168380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.172095   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.368165   36539 request.go:632] Waited for 195.357162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368231   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368236   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.368244   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.368246   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.371117   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:52.371718   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.371734   36539 pod_ready.go:82] duration metric: took 399.551881ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.371744   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.568001   36539 request.go:632] Waited for 196.176636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568074   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568080   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.568088   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.568092   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.571383   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.768625   36539 request.go:632] Waited for 196.402632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768686   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768691   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.768702   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.768706   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.772096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.772550   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.772567   36539 pod_ready.go:82] duration metric: took 400.817883ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.772577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.968672   36539 request.go:632] Waited for 196.025622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968728   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968736   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.968747   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.968753   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.972192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.168261   36539 request.go:632] Waited for 195.422832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168312   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168321   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.168329   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.168332   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.171764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.172376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.172393   36539 pod_ready.go:82] duration metric: took 399.810959ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.172403   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.368473   36539 request.go:632] Waited for 196.003907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368564   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.368586   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.368598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.371851   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.567754   36539 request.go:632] Waited for 195.293783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567819   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567824   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.567831   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.567836   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.570770   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:53.571309   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.571333   36539 pod_ready.go:82] duration metric: took 398.922957ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.571348   36539 pod_ready.go:39] duration metric: took 3.199181638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
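	The repeated "Waited for ... due to client-side throttling" messages in this phase come from client-go's client-side rate limiter: with the default budget (QPS 5, Burst 10) a tight read loop quickly exhausts the bucket and subsequent requests are delayed. A minimal sketch of building a clientset with a larger budget follows; the values are illustrative assumptions, not minikube's settings.

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient raises the client-side rate-limiter budget above the
	// client-go defaults, which is what produces the throttling waits above.
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // steady-state requests per second (default is 5)
		cfg.Burst = 100 // short-term burst allowance (default is 10)
		return kubernetes.NewForConfig(cfg)
	}

	func main() {
		if _, err := newFastClient(clientcmd.RecommendedHomeFile); err != nil {
			fmt.Println("client setup failed:", err)
		}
	}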
	I1204 23:57:53.571367   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:57:53.571414   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:57:53.585872   36539 api_server.go:72] duration metric: took 22.078506409s to wait for apiserver process to appear ...
	I1204 23:57:53.585898   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:57:53.585919   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:57:53.591379   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:57:53.591435   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:57:53.591440   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.591447   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.591452   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.592362   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:57:53.592455   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:57:53.592472   36539 api_server.go:131] duration metric: took 6.568025ms to wait for apiserver health ...
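	Once the pods are Ready, the log probes /healthz and then /version on the apiserver; the "control plane version: v1.31.2" line is the /version response. A minimal client-go sketch of the same two requests is shown below, authenticating with the kubeconfig's credentials; the kubeconfig path is an assumption and this is not the minikube api_server.go code.

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// GET /healthz; a healthy apiserver answers 200 with body "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Println("healthz:", string(body))

		// GET /version; reports the control-plane version string.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}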
	I1204 23:57:53.592481   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:57:53.767827   36539 request.go:632] Waited for 175.285413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767882   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767888   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.767897   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.767903   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.772373   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:53.776877   36539 system_pods.go:59] 17 kube-system pods found
	I1204 23:57:53.776902   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:53.776908   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:53.776912   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:53.776916   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:53.776920   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:53.776924   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:53.776927   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:53.776930   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:53.776934   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:53.776937   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:53.776941   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:53.776946   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:53.776949   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:53.776954   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:53.776956   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:53.776962   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:53.776965   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:53.776972   36539 system_pods.go:74] duration metric: took 184.481749ms to wait for pod list to return data ...
	I1204 23:57:53.777015   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:57:53.968124   36539 request.go:632] Waited for 191.035378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968194   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968199   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.968206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.968213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.971592   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.971802   36539 default_sa.go:45] found service account: "default"
	I1204 23:57:53.971821   36539 default_sa.go:55] duration metric: took 194.795603ms for default service account to be created ...
	I1204 23:57:53.971829   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:57:54.168141   36539 request.go:632] Waited for 196.235523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168209   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.168224   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.168233   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.172658   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:54.176765   36539 system_pods.go:86] 17 kube-system pods found
	I1204 23:57:54.176792   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:54.176798   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:54.176803   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:54.176806   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:54.176809   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:54.176813   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:54.176816   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:54.176819   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:54.176823   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:54.176835   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:54.176843   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:54.176846   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:54.176850   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:54.176853   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:54.176856   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:54.176859   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:54.176862   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:54.176867   36539 system_pods.go:126] duration metric: took 205.031013ms to wait for k8s-apps to be running ...
	I1204 23:57:54.176876   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:57:54.176915   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:54.192123   36539 system_svc.go:56] duration metric: took 15.235714ms WaitForService to wait for kubelet
	I1204 23:57:54.192153   36539 kubeadm.go:582] duration metric: took 22.684789622s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:57:54.192178   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:57:54.368510   36539 request.go:632] Waited for 176.251249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368569   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368574   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.368581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.368586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.371935   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:54.372940   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372962   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372973   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372977   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372982   36539 node_conditions.go:105] duration metric: took 180.798569ms to run NodePressure ...
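	The NodePressure step above lists the nodes and reads the capacity fields that appear in the log (ephemeral-storage and cpu). A minimal sketch that prints the same two fields per node, assuming the default kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}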
	I1204 23:57:54.372992   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:57:54.373014   36539 start.go:255] writing updated cluster config ...
	I1204 23:57:54.375210   36539 out.go:201] 
	I1204 23:57:54.376552   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:54.376634   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.378003   36539 out.go:177] * Starting "ha-168758-m03" control-plane node in "ha-168758" cluster
	I1204 23:57:54.379143   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:57:54.379161   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:57:54.379243   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:57:54.379255   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:57:54.379336   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.379486   36539 start.go:360] acquireMachinesLock for ha-168758-m03: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:57:54.379524   36539 start.go:364] duration metric: took 21.433µs to acquireMachinesLock for "ha-168758-m03"
	I1204 23:57:54.379541   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:54.379640   36539 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 23:57:54.382003   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:57:54.382100   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:54.382136   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:54.396586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I1204 23:57:54.397026   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:54.397468   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:54.397491   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:54.397753   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:54.397933   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:57:54.398074   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:57:54.398206   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:57:54.398237   36539 client.go:168] LocalClient.Create starting
	I1204 23:57:54.398271   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:57:54.398314   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398335   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398396   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:57:54.398425   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398442   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398467   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:57:54.398479   36539 main.go:141] libmachine: (ha-168758-m03) Calling .PreCreateCheck
	I1204 23:57:54.398600   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:57:54.398924   36539 main.go:141] libmachine: Creating machine...
	I1204 23:57:54.398936   36539 main.go:141] libmachine: (ha-168758-m03) Calling .Create
	I1204 23:57:54.399103   36539 main.go:141] libmachine: (ha-168758-m03) Creating KVM machine...
	I1204 23:57:54.400268   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing default KVM network
	I1204 23:57:54.400380   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing private KVM network mk-ha-168758
	I1204 23:57:54.400501   36539 main.go:141] libmachine: (ha-168758-m03) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.400521   36539 main.go:141] libmachine: (ha-168758-m03) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:57:54.400579   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.400496   37304 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.400687   36539 main.go:141] libmachine: (ha-168758-m03) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:57:54.638854   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.638724   37304 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa...
	I1204 23:57:54.688376   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688260   37304 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk...
	I1204 23:57:54.688404   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing magic tar header
	I1204 23:57:54.688416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing SSH key tar header
	I1204 23:57:54.688424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688388   37304 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.688510   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03
	I1204 23:57:54.688554   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 (perms=drwx------)
	I1204 23:57:54.688585   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:57:54.688596   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:57:54.688610   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.688624   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:57:54.688642   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:57:54.688652   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:57:54.688658   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:57:54.688668   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:57:54.688679   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:57:54.688694   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:57:54.688711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home
	I1204 23:57:54.688721   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:54.688733   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Skipping /home - not owner
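
Aside: the "Setting executable bit set on ..." / "Checking permissions on dir ..." lines above come from the driver walking upward from the machine directory and making every parent traversable, skipping directories it does not own. A minimal, hypothetical sketch of that walk-up-and-chmod pattern (function and path names are illustrative, not the driver's actual common.go code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensureTraversable walks from dir up to stopAt, adding the owner
    // executable bit so every parent of the machine store can be entered.
    func ensureTraversable(dir, stopAt string) error {
        for {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if info.Mode()&0o100 == 0 { // owner "x" bit missing
                if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
                    return fmt.Errorf("chmod %s: %w", dir, err)
                }
            }
            if dir == stopAt || dir == string(filepath.Separator) {
                return nil
            }
            dir = filepath.Dir(dir)
        }
    }

    func main() {
        // Paths mirror the store layout in the log; in the real run /home is
        // skipped because the process is not its owner.
        if err := ensureTraversable(
            "/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03",
            "/home/jenkins",
        ); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
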
	I1204 23:57:54.689655   36539 main.go:141] libmachine: (ha-168758-m03) define libvirt domain using xml: 
	I1204 23:57:54.689672   36539 main.go:141] libmachine: (ha-168758-m03) <domain type='kvm'>
	I1204 23:57:54.689678   36539 main.go:141] libmachine: (ha-168758-m03)   <name>ha-168758-m03</name>
	I1204 23:57:54.689682   36539 main.go:141] libmachine: (ha-168758-m03)   <memory unit='MiB'>2200</memory>
	I1204 23:57:54.689687   36539 main.go:141] libmachine: (ha-168758-m03)   <vcpu>2</vcpu>
	I1204 23:57:54.689691   36539 main.go:141] libmachine: (ha-168758-m03)   <features>
	I1204 23:57:54.689715   36539 main.go:141] libmachine: (ha-168758-m03)     <acpi/>
	I1204 23:57:54.689732   36539 main.go:141] libmachine: (ha-168758-m03)     <apic/>
	I1204 23:57:54.689738   36539 main.go:141] libmachine: (ha-168758-m03)     <pae/>
	I1204 23:57:54.689742   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.689747   36539 main.go:141] libmachine: (ha-168758-m03)   </features>
	I1204 23:57:54.689752   36539 main.go:141] libmachine: (ha-168758-m03)   <cpu mode='host-passthrough'>
	I1204 23:57:54.689758   36539 main.go:141] libmachine: (ha-168758-m03)   
	I1204 23:57:54.689763   36539 main.go:141] libmachine: (ha-168758-m03)   </cpu>
	I1204 23:57:54.689771   36539 main.go:141] libmachine: (ha-168758-m03)   <os>
	I1204 23:57:54.689775   36539 main.go:141] libmachine: (ha-168758-m03)     <type>hvm</type>
	I1204 23:57:54.689783   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='cdrom'/>
	I1204 23:57:54.689786   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='hd'/>
	I1204 23:57:54.689799   36539 main.go:141] libmachine: (ha-168758-m03)     <bootmenu enable='no'/>
	I1204 23:57:54.689805   36539 main.go:141] libmachine: (ha-168758-m03)   </os>
	I1204 23:57:54.689812   36539 main.go:141] libmachine: (ha-168758-m03)   <devices>
	I1204 23:57:54.689817   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='cdrom'>
	I1204 23:57:54.689828   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/boot2docker.iso'/>
	I1204 23:57:54.689833   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hdc' bus='scsi'/>
	I1204 23:57:54.689840   36539 main.go:141] libmachine: (ha-168758-m03)       <readonly/>
	I1204 23:57:54.689844   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689852   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='disk'>
	I1204 23:57:54.689857   36539 main.go:141] libmachine: (ha-168758-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:57:54.689867   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk'/>
	I1204 23:57:54.689875   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hda' bus='virtio'/>
	I1204 23:57:54.689882   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689886   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689891   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='mk-ha-168758'/>
	I1204 23:57:54.689898   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689903   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689910   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689917   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='default'/>
	I1204 23:57:54.689924   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689928   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689935   36539 main.go:141] libmachine: (ha-168758-m03)     <serial type='pty'>
	I1204 23:57:54.689957   36539 main.go:141] libmachine: (ha-168758-m03)       <target port='0'/>
	I1204 23:57:54.689973   36539 main.go:141] libmachine: (ha-168758-m03)     </serial>
	I1204 23:57:54.689983   36539 main.go:141] libmachine: (ha-168758-m03)     <console type='pty'>
	I1204 23:57:54.689987   36539 main.go:141] libmachine: (ha-168758-m03)       <target type='serial' port='0'/>
	I1204 23:57:54.689993   36539 main.go:141] libmachine: (ha-168758-m03)     </console>
	I1204 23:57:54.689997   36539 main.go:141] libmachine: (ha-168758-m03)     <rng model='virtio'>
	I1204 23:57:54.690005   36539 main.go:141] libmachine: (ha-168758-m03)       <backend model='random'>/dev/random</backend>
	I1204 23:57:54.690009   36539 main.go:141] libmachine: (ha-168758-m03)     </rng>
	I1204 23:57:54.690013   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690016   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690021   36539 main.go:141] libmachine: (ha-168758-m03)   </devices>
	I1204 23:57:54.690026   36539 main.go:141] libmachine: (ha-168758-m03) </domain>
	I1204 23:57:54.690035   36539 main.go:141] libmachine: (ha-168758-m03) 
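
Aside: the domain definition above is emitted as literal XML and handed to libvirt. As a rough illustration of how a trimmed-down version of that document can be produced with the standard library (the struct and field names here are made up, not the kvm2 driver's real types):

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // Domain is a hypothetical, minimal subset of the <domain> document above.
    type Domain struct {
        XMLName xml.Name `xml:"domain"`
        Type    string   `xml:"type,attr"`
        Name    string   `xml:"name"`
        Memory  struct {
            Unit  string `xml:"unit,attr"`
            Value string `xml:",chardata"`
        } `xml:"memory"`
        VCPU int `xml:"vcpu"`
    }

    func main() {
        d := Domain{Type: "kvm", Name: "ha-168758-m03", VCPU: 2}
        d.Memory.Unit = "MiB"
        d.Memory.Value = "2200"
        out, err := xml.MarshalIndent(d, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
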
	I1204 23:57:54.696711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:45:00:1a in network default
	I1204 23:57:54.697413   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:54.697439   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring networks are active...
	I1204 23:57:54.698233   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network default is active
	I1204 23:57:54.698580   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network mk-ha-168758 is active
	I1204 23:57:54.698931   36539 main.go:141] libmachine: (ha-168758-m03) Getting domain xml...
	I1204 23:57:54.699649   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:55.920334   36539 main.go:141] libmachine: (ha-168758-m03) Waiting to get IP...
	I1204 23:57:55.921079   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:55.921466   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:55.921501   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:55.921456   37304 retry.go:31] will retry after 209.695163ms: waiting for machine to come up
	I1204 23:57:56.132761   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.133191   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.133212   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.133145   37304 retry.go:31] will retry after 335.842051ms: waiting for machine to come up
	I1204 23:57:56.470639   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.471086   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.471135   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.471032   37304 retry.go:31] will retry after 296.078556ms: waiting for machine to come up
	I1204 23:57:56.768275   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.768757   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.768777   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.768709   37304 retry.go:31] will retry after 507.499948ms: waiting for machine to come up
	I1204 23:57:57.277223   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.277662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.277692   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.277613   37304 retry.go:31] will retry after 706.137269ms: waiting for machine to come up
	I1204 23:57:57.985335   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.985740   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.985763   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.985694   37304 retry.go:31] will retry after 901.171563ms: waiting for machine to come up
	I1204 23:57:58.888734   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:58.889195   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:58.889217   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:58.889148   37304 retry.go:31] will retry after 1.040272258s: waiting for machine to come up
	I1204 23:57:59.931201   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:59.931634   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:59.931662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:59.931580   37304 retry.go:31] will retry after 1.47549599s: waiting for machine to come up
	I1204 23:58:01.408702   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:01.409147   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:01.409198   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:01.409117   37304 retry.go:31] will retry after 1.529862151s: waiting for machine to come up
	I1204 23:58:02.941039   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:02.941538   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:02.941562   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:02.941502   37304 retry.go:31] will retry after 1.818025113s: waiting for machine to come up
	I1204 23:58:04.760910   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:04.761392   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:04.761424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:04.761351   37304 retry.go:31] will retry after 1.812557198s: waiting for machine to come up
	I1204 23:58:06.576024   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:06.576523   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:06.576553   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:06.576452   37304 retry.go:31] will retry after 3.417018451s: waiting for machine to come up
	I1204 23:58:09.994797   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:09.995228   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:09.995255   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:09.995175   37304 retry.go:31] will retry after 3.708395185s: waiting for machine to come up
	I1204 23:58:13.707953   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:13.708378   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:13.708407   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:13.708331   37304 retry.go:31] will retry after 5.536922009s: waiting for machine to come up
	I1204 23:58:19.251010   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251548   36539 main.go:141] libmachine: (ha-168758-m03) Found IP for machine: 192.168.39.211
	I1204 23:58:19.251581   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251590   36539 main.go:141] libmachine: (ha-168758-m03) Reserving static IP address...
	I1204 23:58:19.252021   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find host DHCP lease matching {name: "ha-168758-m03", mac: "52:54:00:25:7a:4c", ip: "192.168.39.211"} in network mk-ha-168758
	I1204 23:58:19.326562   36539 main.go:141] libmachine: (ha-168758-m03) Reserved static IP address: 192.168.39.211
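
Aside: each "will retry after ..." line above is one iteration of a loop that polls the network for the domain's DHCP lease with a growing, jittered delay before the IP is finally found and reserved. A minimal sketch of that wait-for-IP pattern, assuming a stand-in lookupIP helper (this is not minikube's retry.go; the exact backoff schedule is also an assumption):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP stands in for querying libvirt's DHCP leases for the domain's
    // MAC address; it always fails here to keep the sketch self-contained.
    func lookupIP(mac string) (string, error) { return "", errNoIP }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, similar to the 209ms, 335ms,
            // 296ms, 507ms, ... progression seen in the log above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:25:7a:4c", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }
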
	I1204 23:58:19.326591   36539 main.go:141] libmachine: (ha-168758-m03) Waiting for SSH to be available...
	I1204 23:58:19.326600   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Getting to WaitForSSH function...
	I1204 23:58:19.329626   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330027   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.330055   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330234   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH client type: external
	I1204 23:58:19.330259   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa (-rw-------)
	I1204 23:58:19.330287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:58:19.330300   36539 main.go:141] libmachine: (ha-168758-m03) DBG | About to run SSH command:
	I1204 23:58:19.330316   36539 main.go:141] libmachine: (ha-168758-m03) DBG | exit 0
	I1204 23:58:19.458033   36539 main.go:141] libmachine: (ha-168758-m03) DBG | SSH cmd err, output: <nil>: 
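
Aside: "Using SSH client type: external" means the driver shells out to the system ssh binary with the non-interactive options logged above and checks that `exit 0` succeeds. A hedged sketch of assembling and running that command with os/exec (argument list copied from the log; the externalSSH helper is made up):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // externalSSH runs a command on the guest using the system ssh binary
    // with the same non-interactive options shown in the log.
    func externalSSH(ip, keyPath, command string) ([]byte, error) {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            command,
        }
        return exec.Command("ssh", args...).CombinedOutput()
    }

    func main() {
        out, err := externalSSH("192.168.39.211",
            "/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa",
            "exit 0")
        fmt.Printf("err=%v output=%q\n", err, out)
    }
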
	I1204 23:58:19.458296   36539 main.go:141] libmachine: (ha-168758-m03) KVM machine creation complete!
	I1204 23:58:19.458639   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:19.459211   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459418   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459604   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:58:19.459621   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetState
	I1204 23:58:19.460815   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:58:19.460827   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:58:19.460832   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:58:19.460837   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.463322   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463682   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.463709   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463819   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.463998   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464138   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464299   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.464466   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.464650   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.464661   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:58:19.569758   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.569785   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:58:19.569794   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.573011   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.573434   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573531   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.573755   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.573966   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.574140   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.574339   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.574549   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.574565   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:58:19.678634   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:58:19.678713   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:58:19.678731   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:58:19.678746   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679001   36539 buildroot.go:166] provisioning hostname "ha-168758-m03"
	I1204 23:58:19.679025   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679187   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.681791   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682166   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.682203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682354   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.682571   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682718   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682893   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.683080   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.683252   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.683265   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m03 && echo "ha-168758-m03" | sudo tee /etc/hostname
	I1204 23:58:19.805397   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m03
	
	I1204 23:58:19.805429   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.808431   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.808856   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.808878   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.809012   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.809177   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809288   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809401   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.809650   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.809863   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.809887   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:58:19.923097   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.923127   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:58:19.923146   36539 buildroot.go:174] setting up certificates
	I1204 23:58:19.923155   36539 provision.go:84] configureAuth start
	I1204 23:58:19.923166   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.923476   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:19.926158   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926547   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.926572   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926691   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.928951   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929352   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.929383   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929531   36539 provision.go:143] copyHostCerts
	I1204 23:58:19.929556   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929583   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:58:19.929592   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929653   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:58:19.929721   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929747   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:58:19.929754   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929777   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:58:19.929820   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929837   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:58:19.929844   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929863   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:58:19.929908   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m03 san=[127.0.0.1 192.168.39.211 ha-168758-m03 localhost minikube]
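
Aside: the line above shows the server certificate being issued with both IP and DNS subject alternative names (san=[127.0.0.1 192.168.39.211 ha-168758-m03 localhost minikube]). A self-contained sketch of issuing a certificate with those SANs via crypto/x509; it is self-signed to stay short, whereas the log's cert is signed by the profile's ca.pem/ca-key.pem, and the validity period and key usages are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-168758-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
            DNSNames:    []string{"ha-168758-m03", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
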
	I1204 23:58:20.121053   36539 provision.go:177] copyRemoteCerts
	I1204 23:58:20.121105   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:58:20.121126   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.123934   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124493   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.124528   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124773   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.124964   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.125130   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.125289   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.208033   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:58:20.208112   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:58:20.233800   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:58:20.233885   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:58:20.258548   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:58:20.258616   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:58:20.282884   36539 provision.go:87] duration metric: took 359.716273ms to configureAuth
	I1204 23:58:20.282915   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:58:20.283121   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:20.283188   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.285836   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286252   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.286287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286487   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.286665   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.286810   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.287008   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.287190   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.287357   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.287370   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:58:20.518772   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:58:20.518795   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:58:20.518802   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetURL
	I1204 23:58:20.520187   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using libvirt version 6000000
	I1204 23:58:20.522653   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523105   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.523133   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523291   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:58:20.523307   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:58:20.523315   36539 client.go:171] duration metric: took 26.125066489s to LocalClient.Create
	I1204 23:58:20.523341   36539 start.go:167] duration metric: took 26.125137032s to libmachine.API.Create "ha-168758"
	I1204 23:58:20.523357   36539 start.go:293] postStartSetup for "ha-168758-m03" (driver="kvm2")
	I1204 23:58:20.523373   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:58:20.523396   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.523640   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:58:20.523662   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.525716   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526081   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.526107   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526258   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.526431   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.526593   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.526718   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.611923   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:58:20.616136   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:58:20.616164   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:58:20.616222   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:58:20.616301   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:58:20.616310   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:58:20.616393   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:58:20.625581   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:20.649393   36539 start.go:296] duration metric: took 126.018678ms for postStartSetup
	I1204 23:58:20.649447   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:20.650084   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.652794   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653173   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.653203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653515   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:58:20.653706   36539 start.go:128] duration metric: took 26.274055056s to createHost
	I1204 23:58:20.653726   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.657371   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.657844   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.657871   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.658096   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.658293   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658453   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658591   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.658709   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.658880   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.658893   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:58:20.762401   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356700.741731450
	
	I1204 23:58:20.762429   36539 fix.go:216] guest clock: 1733356700.741731450
	I1204 23:58:20.762440   36539 fix.go:229] Guest: 2024-12-04 23:58:20.74173145 +0000 UTC Remote: 2024-12-04 23:58:20.653717113 +0000 UTC m=+145.998402889 (delta=88.014337ms)
	I1204 23:58:20.762459   36539 fix.go:200] guest clock delta is within tolerance: 88.014337ms
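
Aside: the clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds output, and compares it with the host's wall clock; here the 88ms delta is inside tolerance so nothing is adjusted. A small sketch of parsing that output and computing the delta (the 2s tolerance is an assumption for the sketch, not taken from the log):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1733356700.741731450") // value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        const tolerance = 2 * time.Second // assumed threshold for the sketch
        within := math.Abs(delta.Seconds()) <= tolerance.Seconds()
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, within)
    }
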
	I1204 23:58:20.762465   36539 start.go:83] releasing machines lock for "ha-168758-m03", held for 26.382931771s
	I1204 23:58:20.762482   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.762743   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.765725   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.766178   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.766206   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.769126   36539 out.go:177] * Found network options:
	I1204 23:58:20.770828   36539 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.70
	W1204 23:58:20.772463   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.772484   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.772496   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773077   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773256   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773351   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:58:20.773389   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	W1204 23:58:20.773530   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.773562   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.773617   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:58:20.773705   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.776558   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776588   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776979   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777009   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777099   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777175   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777373   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777397   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777602   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777604   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777735   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777749   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.777829   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:21.014947   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:58:21.021071   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:58:21.021137   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:58:21.039078   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:58:21.039110   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:58:21.039177   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:58:21.058712   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:58:21.073746   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:58:21.073808   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:58:21.087957   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:58:21.101618   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:58:21.221628   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:58:21.378405   36539 docker.go:233] disabling docker service ...
	I1204 23:58:21.378477   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:58:21.394967   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:58:21.408722   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:58:21.562596   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:58:21.684473   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:58:21.698325   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:58:21.716302   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:58:21.716372   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.726130   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:58:21.726192   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.737846   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.748038   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.758637   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:58:21.769448   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.779664   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.796293   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
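
Aside: each sed invocation above edits one key in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). As an illustration, the two simplest rewrites can be done in-process with regexps; the patterns below are simplified relative to the sed expressions in the log and the sample input is invented:

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    // rewriteCrioConf pins the pause image and forces the cgroupfs cgroup
    // manager, mirroring the first two sed edits in the log.
    func rewriteCrioConf(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }
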
	I1204 23:58:21.806603   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:58:21.816322   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:58:21.816373   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:58:21.829466   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
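
Aside: the netfilter verification above is allowed to fail ("which might be okay"); when the sysctl key is missing the driver falls back to loading br_netfilter and then enables IPv4 forwarding regardless. A hedged sketch of that check-then-fallback sequence using the same guest commands the log shows (running them locally here is only for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v (%s)", name, args, err, out)
        }
        return nil
    }

    func main() {
        // Verify the bridge netfilter sysctl; a failure here "might be okay".
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("couldn't verify netfilter:", err)
            // Fall back to loading the module explicitly, as the log does.
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        // Always make sure IPv4 forwarding is enabled.
        _ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }
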
	I1204 23:58:21.839041   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:21.954707   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:58:22.043532   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:58:22.043590   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:58:22.048235   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:58:22.048287   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:58:22.051918   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:58:22.091888   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:58:22.091959   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.119246   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.147844   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:58:22.149672   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:58:22.151366   36539 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.70
	I1204 23:58:22.152919   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:22.155788   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:22.156156   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156371   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:58:22.161001   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:22.172781   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:58:22.173021   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:22.173277   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.173312   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.188201   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I1204 23:58:22.188638   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.189068   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.189086   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.189487   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.189677   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:58:22.191201   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:22.191497   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.191540   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.206205   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I1204 23:58:22.206662   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.207121   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.207141   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.207423   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.207583   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:22.207723   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.211
	I1204 23:58:22.207733   36539 certs.go:194] generating shared ca certs ...
	I1204 23:58:22.207746   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.207850   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:58:22.207885   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:58:22.207894   36539 certs.go:256] generating profile certs ...
	I1204 23:58:22.207958   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:58:22.207980   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0
	I1204 23:58:22.207995   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1204 23:58:22.288746   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 ...
	I1204 23:58:22.288772   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0: {Name:mk226e78394d9a1d1481fe02645dfa0bdae47c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.288924   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 ...
	I1204 23:58:22.288937   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0: {Name:mk254430c45779d3847a67c61786cc3c2c6ae80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.289003   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:58:22.289131   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:58:22.289249   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:58:22.289263   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:58:22.289275   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:58:22.289288   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:58:22.289301   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:58:22.289314   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:58:22.289328   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:58:22.289344   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:58:22.306093   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:58:22.306200   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:58:22.306239   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:58:22.306253   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:58:22.306298   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:58:22.306334   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:58:22.306370   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:58:22.306426   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:22.306471   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.306493   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.306511   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.306552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:22.309680   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310101   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:22.310130   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310297   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:22.310547   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:22.310740   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:22.310907   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:22.386340   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:58:22.392285   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:58:22.405695   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:58:22.409772   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:58:22.423743   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:58:22.427677   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:58:22.437721   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:58:22.441623   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:58:22.452543   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:58:22.461102   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:58:22.471271   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:58:22.477234   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:58:22.491216   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:58:22.516244   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:58:22.540200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:58:22.565503   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:58:22.589637   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 23:58:22.614321   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 23:58:22.637487   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:58:22.661061   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:58:22.683528   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:58:22.706445   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:58:22.729085   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:58:22.750946   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:58:22.766659   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:58:22.782041   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:58:22.797608   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:58:22.812913   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:58:22.828464   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:58:22.843593   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:58:22.859636   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:58:22.865420   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:58:22.876063   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880415   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880469   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.885958   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:58:22.896219   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:58:22.906770   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910894   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.916407   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:58:22.926735   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:58:22.937057   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941137   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941190   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.946692   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:58:22.959164   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:58:22.963452   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:58:22.963502   36539 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1204 23:58:22.963582   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:58:22.963624   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:58:22.963666   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:58:22.979996   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:58:22.980050   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:58:22.980122   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.989754   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:58:22.989828   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.999376   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 23:58:22.999414   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 23:58:22.999429   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999442   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:58:22.999460   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:58:22.999481   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:22.999484   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999564   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:23.016751   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:58:23.016792   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:58:23.016813   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.016873   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:58:23.016893   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:58:23.016908   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.044749   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:58:23.044784   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 23:58:23.845384   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:58:23.856608   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 23:58:23.873561   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:58:23.888925   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:58:23.904283   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:58:23.908166   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:23.921626   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:24.054957   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:24.071310   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:24.071774   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:24.071826   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:24.087392   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I1204 23:58:24.087908   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:24.088426   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:24.088445   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:24.088872   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:24.089072   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:24.089234   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:24.089406   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:58:24.089426   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:24.092795   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093319   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:24.093346   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093418   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:24.093603   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:24.093787   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:24.093916   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:24.257067   36539 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:24.257133   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1204 23:58:46.569250   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.312080737s)
	I1204 23:58:46.569302   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:58:47.113692   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m03 minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:58:47.254473   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:58:47.374583   36539 start.go:319] duration metric: took 23.285344776s to joinCluster
	I1204 23:58:47.374658   36539 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:47.375005   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:47.375915   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:58:47.376822   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:47.645069   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:47.700465   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:58:47.700774   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:58:47.700882   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:58:47.701229   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:58:47.701326   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:47.701346   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:47.701361   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:47.701372   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:47.704946   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.202242   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.202269   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.202277   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.202281   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.205645   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.701516   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.701536   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.701543   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.701547   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.704919   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.202192   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.202215   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.202225   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.202231   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.205567   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.701894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.701912   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.701920   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.701928   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.705415   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.706113   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:50.201921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.201950   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.201958   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.201963   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.205192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:50.701850   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.701878   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.701890   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.701897   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.705478   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.201426   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.201446   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.201457   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.201463   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.204423   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:51.702262   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.702285   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.702293   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.702297   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.706096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.706649   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:52.201560   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.201578   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.201585   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.201590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.204456   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:52.701709   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.701729   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.701737   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.701742   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.705302   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:53.201587   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.201627   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.201637   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.201642   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.206081   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:58:53.702110   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.702136   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.702146   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.702151   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.707592   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:58:53.708884   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:54.202266   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.202287   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.202299   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.202303   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.205388   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:54.702184   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.702205   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.702212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.702217   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.705854   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.202090   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.202109   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.202117   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.202122   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.205278   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.702366   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.702386   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.702393   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.702398   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.705911   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.202018   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.202039   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.202048   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.202052   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.205205   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.205770   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:56.702046   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.702068   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.702077   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.702079   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.705883   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.201534   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.201553   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.201560   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.201565   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.204718   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.702166   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.702192   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.702207   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.702215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.706100   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.202178   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.202197   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.202207   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.202213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.205266   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.205989   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:58.702432   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.702453   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.702461   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.702465   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.706164   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:59.201677   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.201698   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.201706   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.204712   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:59.702080   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.702106   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.702114   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.702118   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.705833   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.202377   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.202398   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.202405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.202410   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.204828   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:00.701624   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.701644   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.701652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.701655   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.704805   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.705308   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:01.201668   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.201694   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.201704   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.205149   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:01.702232   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.702261   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.702269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.702275   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.706624   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:02.201616   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.201640   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.201652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.201658   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.213640   36539 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1204 23:59:02.701482   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.701500   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.701507   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.701511   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.707401   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:02.707932   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:03.202348   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.202370   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.202382   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.202386   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.206086   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:03.702128   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.702153   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.702167   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.702176   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.705764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.202048   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.202071   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.202079   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.202083   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.205585   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.702350   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.702371   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.702379   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.702383   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.706391   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.201695   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.201717   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.201724   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.201728   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.206270   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:05.207294   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:05.701694   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.701716   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.701727   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.701733   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.705389   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.706154   36539 node_ready.go:49] node "ha-168758-m03" has status "Ready":"True"
	I1204 23:59:05.706172   36539 node_ready.go:38] duration metric: took 18.004918776s for node "ha-168758-m03" to be "Ready" ...
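The block of repeated GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03 requests above is a readiness poll: the node object is re-fetched roughly every half second until its Ready condition reports True, which here took a little over 18 seconds. Below is a minimal client-go sketch of the same kind of wait; the kubeconfig path and node name are the ones from this run, and waitNodeReady is an illustrative helper rather than the function minikube actually calls.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node every 500ms until its Ready condition is
// True, mirroring the polling pattern visible in the round_trippers log lines.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Kubeconfig path and node name are taken from this run; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20045-14126/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The 6-minute budget matches the "waiting up to 6m0s" line in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-168758-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}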
	I1204 23:59:05.706181   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:05.706250   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:05.706260   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.706266   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.706270   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.713324   36539 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 23:59:05.720724   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.720806   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:59:05.720815   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.720829   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.720837   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.724822   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.725428   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.725443   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.725451   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.725455   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.728496   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.729387   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.729410   36539 pod_ready.go:82] duration metric: took 8.658373ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729423   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729491   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:59:05.729504   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.729516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.729523   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.732217   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.732771   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.732784   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.732791   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.732795   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.735517   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.736112   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.736127   36539 pod_ready.go:82] duration metric: took 6.697955ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736135   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736189   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:59:05.736199   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.736206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.736209   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.739373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.740248   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.740263   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.740269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.740273   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.742492   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.742913   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.742932   36539 pod_ready.go:82] duration metric: took 6.791875ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742940   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742986   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:59:05.742994   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.743000   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.743004   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.745506   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.746152   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:05.746167   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.746173   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.746177   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.748564   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.749064   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.749082   36539 pod_ready.go:82] duration metric: took 6.135218ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.749094   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.902481   36539 request.go:632] Waited for 153.318139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902555   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902560   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.902568   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.902576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.906398   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.102389   36539 request.go:632] Waited for 195.399546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102443   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102448   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.102476   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.102487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.105971   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.106400   36539 pod_ready.go:93] pod "etcd-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.106416   36539 pod_ready.go:82] duration metric: took 357.316023ms for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.106432   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.302535   36539 request.go:632] Waited for 196.013383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302597   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302608   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.302620   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.302626   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.305529   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:06.502522   36539 request.go:632] Waited for 196.356899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502576   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.502590   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.502598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.507875   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:06.508456   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.508479   36539 pod_ready.go:82] duration metric: took 402.037453ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.508493   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.702449   36539 request.go:632] Waited for 193.867804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702503   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702509   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.702516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.702521   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.706330   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.902289   36539 request.go:632] Waited for 195.373137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902347   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902355   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.902371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.902380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.905544   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.906052   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.906069   36539 pod_ready.go:82] duration metric: took 397.563799ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.906080   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.102210   36539 request.go:632] Waited for 196.065026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102278   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102285   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.102298   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.102311   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.105396   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.302017   36539 request.go:632] Waited for 195.70193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302103   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302115   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.302147   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.302157   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.307086   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:07.307589   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.307617   36539 pod_ready.go:82] duration metric: took 401.528153ms for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.307631   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.502581   36539 request.go:632] Waited for 194.865138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502634   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502639   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.502646   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.502652   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.506125   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.702202   36539 request.go:632] Waited for 195.372404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702281   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702288   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.702295   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.702302   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.705978   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.706658   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.706678   36539 pod_ready.go:82] duration metric: took 399.03822ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.706694   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.901924   36539 request.go:632] Waited for 195.154365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902019   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902029   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.902036   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.902039   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.905257   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.102494   36539 request.go:632] Waited for 196.34114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102566   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.102574   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.102581   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.106643   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:08.107401   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.107420   36539 pod_ready.go:82] duration metric: took 400.717315ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.107434   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.302582   36539 request.go:632] Waited for 195.013676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302656   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302666   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.302679   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.302688   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.308848   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:08.501837   36539 request.go:632] Waited for 192.289915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501891   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501918   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.501932   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.501962   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.504980   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:08.505549   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.505567   36539 pod_ready.go:82] duration metric: took 398.125286ms for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.505577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.702722   36539 request.go:632] Waited for 197.087803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702788   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702797   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.702805   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.702809   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.706146   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.902503   36539 request.go:632] Waited for 195.802416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902574   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.902588   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.902593   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.905815   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.906362   36539 pod_ready.go:93] pod "kube-proxy-9hxpd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.906385   36539 pod_ready.go:82] duration metric: took 400.800234ms for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.906398   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.101673   36539 request.go:632] Waited for 195.212044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101755   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101767   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.101780   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.101787   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.105084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.302204   36539 request.go:632] Waited for 196.371294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302277   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302283   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.302290   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.302295   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.305317   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.306001   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.306020   36539 pod_ready.go:82] duration metric: took 399.614816ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.306032   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.502424   36539 request.go:632] Waited for 196.313347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502475   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502483   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.502494   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.502501   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.505899   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.702597   36539 request.go:632] Waited for 196.046362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702661   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702667   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.702675   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.702679   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.706005   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.706501   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.706518   36539 pod_ready.go:82] duration metric: took 400.479582ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.706528   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.902149   36539 request.go:632] Waited for 195.548343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902203   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902208   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.902215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.902218   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.905375   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.102485   36539 request.go:632] Waited for 196.29891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102549   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102556   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.102567   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.102576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.105620   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.106279   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.106297   36539 pod_ready.go:82] duration metric: took 399.763414ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.106307   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.302480   36539 request.go:632] Waited for 196.111079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302570   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.302581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.302590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.306007   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.501852   36539 request.go:632] Waited for 195.277553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501924   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.501957   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.501978   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.505309   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.505892   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.505913   36539 pod_ready.go:82] duration metric: took 399.598555ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.505927   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.702021   36539 request.go:632] Waited for 195.988022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702085   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702093   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.702104   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.702111   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.705373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.902383   36539 request.go:632] Waited for 196.36776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902455   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902463   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.902473   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.902487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.905908   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.906376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.906394   36539 pod_ready.go:82] duration metric: took 400.458551ms for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.906408   36539 pod_ready.go:39] duration metric: took 5.200215763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:10.906427   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:59:10.906489   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:59:10.925438   36539 api_server.go:72] duration metric: took 23.550742485s to wait for apiserver process to appear ...
	I1204 23:59:10.925468   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:59:10.925490   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:59:10.929840   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:59:10.929894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:59:10.929901   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.929909   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.929916   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.930671   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:59:10.930740   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:59:10.930758   36539 api_server.go:131] duration metric: took 5.282883ms to wait for apiserver health ...
	I1204 23:59:10.930771   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:59:11.102180   36539 request.go:632] Waited for 171.329772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102245   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102253   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.102264   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.102275   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.108643   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.116294   36539 system_pods.go:59] 24 kube-system pods found
	I1204 23:59:11.116322   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.116329   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.116335   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.116341   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.116346   36539 system_pods.go:61] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.116350   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.116356   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.116361   36539 system_pods.go:61] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.116367   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.116375   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.116381   36539 system_pods.go:61] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.116388   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.116399   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.116406   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.116415   36539 system_pods.go:61] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.116421   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.116429   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.116435   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.116444   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.116450   36539 system_pods.go:61] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.116458   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.116464   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.116472   36539 system_pods.go:61] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.116480   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.116490   36539 system_pods.go:74] duration metric: took 185.709136ms to wait for pod list to return data ...
	I1204 23:59:11.116504   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:59:11.302342   36539 request.go:632] Waited for 185.765456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302393   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302399   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.302405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.302409   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.305795   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.305915   36539 default_sa.go:45] found service account: "default"
	I1204 23:59:11.305932   36539 default_sa.go:55] duration metric: took 189.416579ms for default service account to be created ...
	I1204 23:59:11.305962   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:59:11.502392   36539 request.go:632] Waited for 196.360375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502441   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502446   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.502454   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.502457   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.509349   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.515542   36539 system_pods.go:86] 24 kube-system pods found
	I1204 23:59:11.515567   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.515572   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.515576   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.515580   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.515584   36539 system_pods.go:89] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.515587   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.515592   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.515595   36539 system_pods.go:89] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.515599   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.515603   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.515607   36539 system_pods.go:89] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.515611   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.515615   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.515620   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.515624   36539 system_pods.go:89] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.515627   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.515630   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.515633   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.515636   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.515639   36539 system_pods.go:89] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.515642   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.515645   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.515648   36539 system_pods.go:89] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.515654   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.515660   36539 system_pods.go:126] duration metric: took 209.689107ms to wait for k8s-apps to be running ...
	I1204 23:59:11.515669   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:59:11.515708   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:59:11.532158   36539 system_svc.go:56] duration metric: took 16.479991ms WaitForService to wait for kubelet
	I1204 23:59:11.532192   36539 kubeadm.go:582] duration metric: took 24.157502148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:59:11.532215   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:59:11.702618   36539 request.go:632] Waited for 170.317658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702682   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702690   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.702701   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.702708   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.706495   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.707279   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707300   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707313   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707318   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707323   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707329   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707335   36539 node_conditions.go:105] duration metric: took 175.114412ms to run NodePressure ...
	I1204 23:59:11.707353   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:59:11.707385   36539 start.go:255] writing updated cluster config ...
	I1204 23:59:11.707662   36539 ssh_runner.go:195] Run: rm -f paused
	I1204 23:59:11.758613   36539 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:59:11.760643   36539 out.go:177] * Done! kubectl is now configured to use "ha-168758" cluster and "default" namespace by default
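
The sequence above shows the readiness gates the start path waited on before reporting success: every system-critical pod "Ready", the kube-apiserver process and its /healthz endpoint, the default service account, the kubelet unit, and the NodePressure conditions on all three nodes. Below is a minimal sketch of re-running those same checks by hand; it assumes the "ha-168758" kubeconfig context and minikube profile created above are still present, and it is not the code path minikube itself uses (minikube polls the API directly, as the round_trippers lines show).

	# Pod readiness in kube-system (what the pod_ready waits above covered)
	kubectl --context ha-168758 -n kube-system wait --for=condition=Ready pod --all --timeout=6m
	# API server health (the /healthz probe that returned "ok" above)
	kubectl --context ha-168758 get --raw /healthz
	# Default service account and node conditions
	kubectl --context ha-168758 -n default get serviceaccount default
	kubectl --context ha-168758 describe nodes | grep -A5 Conditions
	# kubelet service on the node (the systemctl is-active check above)
	minikube -p ha-168758 ssh -- sudo systemctl is-active kubelet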
	
	
	==> CRI-O <==
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.253799351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8efc53e-02b4-43e5-801a-fb1f83f32b2c name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.254825687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cce37675-acfc-403a-8ab9-494275786504 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.255454975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356967255414278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cce37675-acfc-403a-8ab9-494275786504 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.255919752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d2b4be5-0c0b-4700-980e-f264e6a36b93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.256146259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d2b4be5-0c0b-4700-980e-f264e6a36b93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.256410544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d2b4be5-0c0b-4700-980e-f264e6a36b93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.292124522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa1284b1-bd72-49d6-a56c-d3a6d9066460 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.292210180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa1284b1-bd72-49d6-a56c-d3a6d9066460 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.293373297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb2cb57b-c7cc-443f-bcc5-c6d03daeea73 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.294030878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356967294000411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb2cb57b-c7cc-443f-bcc5-c6d03daeea73 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.294908693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18b1e4a8-14fd-43ce-9ff3-d31f49d2bd9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.295004170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18b1e4a8-14fd-43ce-9ff3-d31f49d2bd9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.295442489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18b1e4a8-14fd-43ce-9ff3-d31f49d2bd9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.328291092Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=79debe26-e042-443c-937f-335b32f761a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.328550862Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-qnw8r,Uid:b7d9809d-958e-4df8-8b72-435c46d8030d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356753019126940,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:59:12.700590185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rdbhr,Uid:84a1588a-ae3a-4cf6-b483-eeee3e386743,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1733356613461349496,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84a1588a-ae3a-4cf6-b483-eeee3e386743,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:56:53.132089729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgfvm,Uid:aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356613451482163,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-12-04T23:56:53.136743852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356613446858198,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-04T23:56:53.139132829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-vjhjm,Uid:e00e9325-7046-41b0-ade6-8cac8362e399,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356598018310539,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-12-04T23:56:37.679990231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&PodSandboxMetadata{Name:kindnet-9l7jr,Uid:b4b65009-de28-48a8-afc3-66d8a0aaf8a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356598004149272,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:56:37.681820018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-168758,Uid:38b79a47425cbd3637dfbec9a8cbddf4,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1733356587522866200,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{kubernetes.io/config.hash: 38b79a47425cbd3637dfbec9a8cbddf4,kubernetes.io/config.seen: 2024-12-04T23:56:27.050071793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-168758,Uid:1f069d7eea6c7110e065e170679d8257,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587517493558,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1f06
9d7eea6c7110e065e170679d8257,kubernetes.io/config.seen: 2024-12-04T23:56:27.050070896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-168758,Uid:3e5c96a4dd7f3f8d2d532f784e777953,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587516376717,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e5c96a4dd7f3f8d2d532f784e777953,kubernetes.io/config.seen: 2024-12-04T23:56:27.050069550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&PodSandboxMetadata{Name:etcd-ha-168758,Uid:ff4359e8
a62956d9be4ff300e54d2dca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587497832763,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.246:2379,kubernetes.io/config.hash: ff4359e8a62956d9be4ff300e54d2dca,kubernetes.io/config.seen: 2024-12-04T23:56:27.050072775Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-168758,Uid:216b15c2b5f2b8f51d538046badfbcd1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587492794551,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-168758,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.246:8443,kubernetes.io/config.hash: 216b15c2b5f2b8f51d538046badfbcd1,kubernetes.io/config.seen: 2024-12-04T23:56:27.050065735Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=79debe26-e042-443c-937f-335b32f761a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.329228309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932611e7-a459-47d2-ba91-5314106bd3bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.329296714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932611e7-a459-47d2-ba91-5314106bd3bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.329527682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932611e7-a459-47d2-ba91-5314106bd3bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.331504225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c73405e-e117-43fc-8e7a-55e7fb01765d name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.331575638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c73405e-e117-43fc-8e7a-55e7fb01765d name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.332577948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb37605f-3de4-4c10-85e8-0ccfc705928d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.333214937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356967333195558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb37605f-3de4-4c10-85e8-0ccfc705928d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.333688459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fd1d7f5-0a05-4210-a621-0551f4eb410b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.333753941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fd1d7f5-0a05-4210-a621-0551f4eb410b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:47 ha-168758 crio[658]: time="2024-12-05 00:02:47.334029460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fd1d7f5-0a05-4210-a621-0551f4eb410b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7ff0ab07801       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a10916ad1046       busybox-7dff88458-qnw8r
	acd6b28e94859       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b695edf511c5e       coredns-7c65d6cfc9-zgfvm
	775689612e70e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b16e3868f6d03       coredns-7c65d6cfc9-rdbhr
	ee81cf4f88af4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   bb748d0b54fa2       storage-provisioner
	4110b2797a4e4       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   f35f03db32634       kindnet-9l7jr
	54fe1d8daa049       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   29cc806b9c066       kube-proxy-vjhjm
	f9cdaa348b12d       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   399ac5ad91348       kube-vip-ha-168758
	f83a840ab3b70       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9e97775967d5d       kube-scheduler-ha-168758
	1729d6a102a14       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7fa6e474a7667       kube-controller-manager-ha-168758
	9ba9268cc4c55       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c6a8751c9bc08       etcd-ha-168758
	413e537381433       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   05e1ede7541eb       kube-apiserver-ha-168758
	
	
	==> coredns [775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977] <==
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242119s
	[INFO] 10.244.1.2:36162 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009811s
	[INFO] 10.244.1.2:39283 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118789s
	[INFO] 10.244.2.2:54111 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004220931s
	[INFO] 10.244.2.2:32931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237164s
	[INFO] 10.244.2.2:41295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142312s
	[INFO] 10.244.2.2:53943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008473674s
	[INFO] 10.244.2.2:46770 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202477s
	[INFO] 10.244.2.2:55371 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129803s
	[INFO] 10.244.0.4:54472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128336s
	[INFO] 10.244.0.4:56711 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094635s
	[INFO] 10.244.1.2:59375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133764s
	[INFO] 10.244.1.2:35780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111638s
	[INFO] 10.244.1.2:34975 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070467s
	[INFO] 10.244.2.2:45753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154754s
	[INFO] 10.244.2.2:33451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215593s
	[INFO] 10.244.2.2:40356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085623s
	[INFO] 10.244.0.4:46816 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012269s
	[INFO] 10.244.0.4:47330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069513s
	[INFO] 10.244.0.4:57640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114217s
	[INFO] 10.244.1.2:33801 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096362s
	[INFO] 10.244.2.2:37705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000200759s
	[INFO] 10.244.2.2:42211 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190547s
	[INFO] 10.244.0.4:33141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134356s
	[INFO] 10.244.0.4:43968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006677s
	
	
	==> coredns [acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c] <==
	[INFO] 10.244.2.2:49728 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316528s
	[INFO] 10.244.0.4:43307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166964s
	[INFO] 10.244.0.4:56572 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062823s
	[INFO] 10.244.0.4:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000465964s
	[INFO] 10.244.0.4:41018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122656s
	[INFO] 10.244.0.4:58493 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154857s
	[INFO] 10.244.0.4:52626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075295s
	[INFO] 10.244.1.2:49458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001666843s
	[INFO] 10.244.1.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102856s
	[INFO] 10.244.1.2:33580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001685081s
	[INFO] 10.244.1.2:42118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171665s
	[INFO] 10.244.1.2:59523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108873s
	[INFO] 10.244.2.2:49031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292612s
	[INFO] 10.244.0.4:56781 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089871s
	[INFO] 10.244.1.2:52724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218204s
	[INFO] 10.244.1.2:48399 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104187s
	[INFO] 10.244.1.2:60140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087293s
	[INFO] 10.244.2.2:47493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233961s
	[INFO] 10.244.2.2:38060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000321274s
	[INFO] 10.244.0.4:49076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086072s
	[INFO] 10.244.0.4:47481 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000284893s
	[INFO] 10.244.1.2:55588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252847s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257823s
	[INFO] 10.244.1.2:54163 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122038s
	[INFO] 10.244.1.2:39518 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016158s
	
	
	==> describe nodes <==
	Name:               ha-168758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-168758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cbaecf7e4184dafa60b26978754f7be
	  System UUID:                4cbaecf7-e418-4daf-a60b-26978754f7be
	  Boot ID:                    4bc94051-ae8a-40e7-9636-681faf9bf1fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qnw8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7c65d6cfc9-rdbhr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-7c65d6cfc9-zgfvm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-168758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-9l7jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-168758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-168758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-vjhjm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-168758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-168758                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-168758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-168758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-168758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  NodeReady                5m54s  kubelet          Node ha-168758 status is now: NodeReady
	  Normal  RegisteredNode           5m10s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  RegisteredNode           3m55s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	
	
	Name:               ha-168758-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:57:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:00:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-168758-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f10b54702f3c44dfba5cac04964ccd96
	  System UUID:                f10b5470-2f3c-44df-ba5c-ac04964ccd96
	  Boot ID:                    33cbfb56-c30f-45e3-b72b-7b95c35b9817
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vzls9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-168758-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-f4s96                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-168758-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-168758-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-s57v4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-168758-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-168758-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet          Node ha-168758-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeReady                4m57s                  kubelet          Node ha-168758-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-168758-m02 status is now: NodeNotReady
	
	
	Name:               ha-168758-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-168758-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45e57e3f54964fb3a3aeed03051209b3
	  System UUID:                45e57e3f-5496-4fb3-a3ae-ed03051209b3
	  Boot ID:                    8e17ecfe-4d6f-4380-b2ec-d58e3528bee5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-brzwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-168758-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-qzjm2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-168758-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-168758-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-9hxpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-168758-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-168758-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-168758-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           3m55s                node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	
	
	Name:               ha-168758-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_59_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:59:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:00:17 +0000   Thu, 05 Dec 2024 00:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-168758-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 52df6477abaa40eb8464d395e2a11a95
	  System UUID:                52df6477-abaa-40eb-8464-d395e2a11a95
	  Boot ID:                    af84ed2d-18aa-4219-8f2e-d7ea7a1e76fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-964c5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-6r2gb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-168758-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-168758-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 23:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.965232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.552645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.572491] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.058309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061191] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.184233] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.102774] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.251131] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.779101] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.425619] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.056248] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.972412] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.078582] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128288] kauditd_printk_skb: 33 callbacks suppressed
	[ +14.702984] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 4 23:57] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8] <==
	{"level":"warn","ts":"2024-12-05T00:02:47.575199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.583802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.589586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.598087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.603017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.607852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.614356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.618630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.621817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.628822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.634503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.641068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.648446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.649442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.652656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.658995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.664682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.681236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.685127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.688885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.694458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.694867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.704101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.712999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:47.747503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:02:47 up 6 min,  0 users,  load average: 0.20, 0.35, 0.19
	Linux ha-168758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302] <==
	I1205 00:02:12.977135       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:22.982357       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:22.982450       1 main.go:301] handling current node
	I1205 00:02:22.982479       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:22.982497       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:22.983147       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:22.983185       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:22.983471       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:22.983493       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:32.975709       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:32.975834       1 main.go:301] handling current node
	I1205 00:02:32.975867       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:32.975885       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:32.976196       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:32.976232       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:32.976329       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:32.976349       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:42.976549       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:42.977185       1 main.go:301] handling current node
	I1205 00:02:42.977331       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:42.977368       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:42.977798       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:42.977846       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:42.978121       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:42.978436       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619] <==
	W1204 23:56:31.867915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I1204 23:56:31.869149       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:56:31.873256       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:56:32.153052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:56:33.976231       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:56:33.989474       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 23:56:34.001309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:56:37.651703       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:56:37.764778       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1204 23:59:17.616400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33356: use of closed network connection
	E1204 23:59:17.796618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33360: use of closed network connection
	E1204 23:59:17.975175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33372: use of closed network connection
	E1204 23:59:18.159453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33388: use of closed network connection
	E1204 23:59:18.340465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33408: use of closed network connection
	E1204 23:59:18.525899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33424: use of closed network connection
	E1204 23:59:18.699204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33448: use of closed network connection
	E1204 23:59:18.871260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33456: use of closed network connection
	E1204 23:59:19.044471       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33468: use of closed network connection
	E1204 23:59:19.324768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33494: use of closed network connection
	E1204 23:59:19.503485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33512: use of closed network connection
	E1204 23:59:19.681162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33532: use of closed network connection
	E1204 23:59:19.864655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33548: use of closed network connection
	E1204 23:59:20.038197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33568: use of closed network connection
	E1204 23:59:20.209773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33584: use of closed network connection
	W1205 00:00:41.898434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.211 192.168.39.246]
	
	
	==> kube-controller-manager [1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3] <==
	E1204 23:59:46.495910       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2xb9f failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2xb9f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1204 23:59:46.913622       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-168758-m04\" does not exist"
	I1204 23:59:46.947226       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-168758-m04" podCIDRs=["10.244.3.0/24"]
	I1204 23:59:46.947330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.947390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.951979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.179648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195618       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-168758-m04"
	I1204 23:59:47.217355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.297136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.621862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:57.226529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.702516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.703393       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:00:06.718143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:07.112438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:17.922402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:01:07.143195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:01:07.144088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.170589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.248294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.276459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.988319ms"
	I1205 00:01:07.277110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.518µs"
	I1205 00:01:12.432512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	
	
	==> kube-proxy [54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:56:38.767631       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:56:38.783503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E1204 23:56:38.783650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:56:38.826822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:56:38.826910       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:56:38.827013       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:56:38.829514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:56:38.830207       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:56:38.830274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:56:38.832929       1 config.go:199] "Starting service config controller"
	I1204 23:56:38.833423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:56:38.833660       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:56:38.833695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:56:38.836133       1 config.go:328] "Starting node config controller"
	I1204 23:56:38.836155       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:56:38.934569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:56:38.934627       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:56:38.936209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579] <==
	W1204 23:56:30.307297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:30.307308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.211435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:56:31.211491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.247731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:56:31.247775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.414104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.414215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.426986       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:31.427111       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:56:31.430415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.430490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.446966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:31.447114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.462015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:31.462105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.482639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:31.482672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1204 23:56:33.294482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:58:43.853823       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzjm2" node="ha-168758-m03"
	E1204 23:58:43.856189       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" pod="kube-system/kindnet-qzjm2"
	E1204 23:59:46.998972       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6r2gb" node="ha-168758-m04"
	E1204 23:59:46.999075       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" pod="kube-system/kube-proxy-6r2gb"
	
	
	==> kubelet <==
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:01:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017719    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017769    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.019770    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.020092    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022125    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022148    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026278    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026320    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029261    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029299    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032005    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032327    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:33 ha-168758 kubelet[1298]: E1205 00:02:33.944974    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034558    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037075    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037120    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.39s)
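The kubelet journal above repeatedly logs "failed to get HasDedicatedImageFs: missing image stats", embedding a CRI ImageFsInfoResponse whose ContainerFilesystems list is empty. Below is a hypothetical, standalone Go probe (not minikube or kubelet code) that calls the same CRI ImageFsInfo RPC against CRI-O so the runtime's reported filesystems can be inspected directly; the socket path /var/run/crio/crio.sock is an assumption based on CRI-O defaults, and the program would be run on the node (e.g. via minikube ssh).

// imagefsinfo_probe.go - hypothetical diagnostic sketch, not part of minikube.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same RPC whose response appears in the kubelet errors above.
	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}

	// The logged response contains one image filesystem and no container
	// filesystems; print both lists to compare with what the kubelet saw.
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("image fs: %s used=%d bytes inodes=%d\n",
			fs.GetFsId().GetMountpoint(), fs.GetUsedBytes().GetValue(), fs.GetInodesUsed().GetValue())
	}
	for _, fs := range resp.GetContainerFilesystems() {
		fmt.Printf("container fs: %s used=%d bytes\n",
			fs.GetFsId().GetMountpoint(), fs.GetUsedBytes().GetValue())
	}
}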

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1205 00:02:50.875835   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.406713019s)
ha_test.go:415: expected profile "ha-168758" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-168758\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-168758\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-168758\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.246\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.70\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.211\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.118\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevi
rt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\
",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (1.333932292s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m03_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:55:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:55:54.691284   36539 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:54.691414   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691424   36539 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:54.691432   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691628   36539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:54.692240   36539 out.go:352] Setting JSON to false
	I1204 23:55:54.693153   36539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5901,"bootTime":1733350654,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:54.693247   36539 start.go:139] virtualization: kvm guest
	I1204 23:55:54.695552   36539 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:54.696955   36539 notify.go:220] Checking for updates...
	I1204 23:55:54.696986   36539 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:54.698203   36539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:54.699583   36539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:54.700898   36539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.702193   36539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:54.703640   36539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:54.705053   36539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:54.739286   36539 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 23:55:54.740386   36539 start.go:297] selected driver: kvm2
	I1204 23:55:54.740398   36539 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:55:54.740409   36539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:54.741134   36539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.741212   36539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:55:54.755884   36539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:55:54.755947   36539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:55:54.756215   36539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:55:54.756246   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:55:54.756288   36539 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 23:55:54.756296   36539 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:55:54.756346   36539 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:54.756434   36539 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.758105   36539 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1204 23:55:54.759313   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:55:54.759357   36539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:55:54.759363   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:55:54.759483   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:55:54.759495   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:55:54.759812   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:55:54.759833   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json: {Name:mk6e7aaeeb71d483ab659741dd4a36c7ed0cf735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:55:54.759956   36539 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:55:54.759986   36539 start.go:364] duration metric: took 15.99µs to acquireMachinesLock for "ha-168758"
	I1204 23:55:54.759999   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:55:54.760065   36539 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 23:55:54.761682   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:55:54.761785   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:54.761819   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:54.775974   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I1204 23:55:54.776416   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:54.776960   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:55:54.776978   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:54.777327   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:54.777500   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:55:54.777669   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:55:54.777814   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:55:54.777843   36539 client.go:168] LocalClient.Create starting
	I1204 23:55:54.777875   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:55:54.777915   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.777928   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778006   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:55:54.778033   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.778049   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778066   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:55:54.778079   36539 main.go:141] libmachine: (ha-168758) Calling .PreCreateCheck
	I1204 23:55:54.778456   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:55:54.778813   36539 main.go:141] libmachine: Creating machine...
	I1204 23:55:54.778825   36539 main.go:141] libmachine: (ha-168758) Calling .Create
	I1204 23:55:54.778947   36539 main.go:141] libmachine: (ha-168758) Creating KVM machine...
	I1204 23:55:54.780063   36539 main.go:141] libmachine: (ha-168758) DBG | found existing default KVM network
	I1204 23:55:54.780869   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.780727   36562 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1204 23:55:54.780949   36539 main.go:141] libmachine: (ha-168758) DBG | created network xml: 
	I1204 23:55:54.780981   36539 main.go:141] libmachine: (ha-168758) DBG | <network>
	I1204 23:55:54.780994   36539 main.go:141] libmachine: (ha-168758) DBG |   <name>mk-ha-168758</name>
	I1204 23:55:54.781002   36539 main.go:141] libmachine: (ha-168758) DBG |   <dns enable='no'/>
	I1204 23:55:54.781007   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781012   36539 main.go:141] libmachine: (ha-168758) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 23:55:54.781017   36539 main.go:141] libmachine: (ha-168758) DBG |     <dhcp>
	I1204 23:55:54.781022   36539 main.go:141] libmachine: (ha-168758) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 23:55:54.781030   36539 main.go:141] libmachine: (ha-168758) DBG |     </dhcp>
	I1204 23:55:54.781067   36539 main.go:141] libmachine: (ha-168758) DBG |   </ip>
	I1204 23:55:54.781085   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781094   36539 main.go:141] libmachine: (ha-168758) DBG | </network>
	I1204 23:55:54.781107   36539 main.go:141] libmachine: (ha-168758) DBG | 
	I1204 23:55:54.786047   36539 main.go:141] libmachine: (ha-168758) DBG | trying to create private KVM network mk-ha-168758 192.168.39.0/24...
	I1204 23:55:54.850461   36539 main.go:141] libmachine: (ha-168758) DBG | private KVM network mk-ha-168758 192.168.39.0/24 created
	I1204 23:55:54.850492   36539 main.go:141] libmachine: (ha-168758) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:54.850520   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.850437   36562 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.850537   36539 main.go:141] libmachine: (ha-168758) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:55:54.850557   36539 main.go:141] libmachine: (ha-168758) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:55:55.110330   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.110147   36562 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa...
	I1204 23:55:55.305236   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305108   36562 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk...
	I1204 23:55:55.305287   36539 main.go:141] libmachine: (ha-168758) DBG | Writing magic tar header
	I1204 23:55:55.305304   36539 main.go:141] libmachine: (ha-168758) DBG | Writing SSH key tar header
	I1204 23:55:55.305328   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305248   36562 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:55.305350   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758
	I1204 23:55:55.305432   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 (perms=drwx------)
	I1204 23:55:55.305462   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:55:55.305478   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:55:55.305493   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:55:55.305502   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:55:55.305511   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:55:55.305522   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:55:55.305536   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:55.305552   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:55.305567   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:55:55.305572   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:55:55.305581   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:55:55.305588   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home
	I1204 23:55:55.305594   36539 main.go:141] libmachine: (ha-168758) DBG | Skipping /home - not owner
	I1204 23:55:55.306542   36539 main.go:141] libmachine: (ha-168758) define libvirt domain using xml: 
	I1204 23:55:55.306560   36539 main.go:141] libmachine: (ha-168758) <domain type='kvm'>
	I1204 23:55:55.306569   36539 main.go:141] libmachine: (ha-168758)   <name>ha-168758</name>
	I1204 23:55:55.306577   36539 main.go:141] libmachine: (ha-168758)   <memory unit='MiB'>2200</memory>
	I1204 23:55:55.306585   36539 main.go:141] libmachine: (ha-168758)   <vcpu>2</vcpu>
	I1204 23:55:55.306593   36539 main.go:141] libmachine: (ha-168758)   <features>
	I1204 23:55:55.306598   36539 main.go:141] libmachine: (ha-168758)     <acpi/>
	I1204 23:55:55.306605   36539 main.go:141] libmachine: (ha-168758)     <apic/>
	I1204 23:55:55.306610   36539 main.go:141] libmachine: (ha-168758)     <pae/>
	I1204 23:55:55.306624   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.306651   36539 main.go:141] libmachine: (ha-168758)   </features>
	I1204 23:55:55.306675   36539 main.go:141] libmachine: (ha-168758)   <cpu mode='host-passthrough'>
	I1204 23:55:55.306682   36539 main.go:141] libmachine: (ha-168758)   
	I1204 23:55:55.306690   36539 main.go:141] libmachine: (ha-168758)   </cpu>
	I1204 23:55:55.306700   36539 main.go:141] libmachine: (ha-168758)   <os>
	I1204 23:55:55.306707   36539 main.go:141] libmachine: (ha-168758)     <type>hvm</type>
	I1204 23:55:55.306714   36539 main.go:141] libmachine: (ha-168758)     <boot dev='cdrom'/>
	I1204 23:55:55.306724   36539 main.go:141] libmachine: (ha-168758)     <boot dev='hd'/>
	I1204 23:55:55.306733   36539 main.go:141] libmachine: (ha-168758)     <bootmenu enable='no'/>
	I1204 23:55:55.306742   36539 main.go:141] libmachine: (ha-168758)   </os>
	I1204 23:55:55.306751   36539 main.go:141] libmachine: (ha-168758)   <devices>
	I1204 23:55:55.306763   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='cdrom'>
	I1204 23:55:55.306773   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/boot2docker.iso'/>
	I1204 23:55:55.306798   36539 main.go:141] libmachine: (ha-168758)       <target dev='hdc' bus='scsi'/>
	I1204 23:55:55.306817   36539 main.go:141] libmachine: (ha-168758)       <readonly/>
	I1204 23:55:55.306827   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306836   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='disk'>
	I1204 23:55:55.306848   36539 main.go:141] libmachine: (ha-168758)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:55:55.306863   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk'/>
	I1204 23:55:55.306877   36539 main.go:141] libmachine: (ha-168758)       <target dev='hda' bus='virtio'/>
	I1204 23:55:55.306888   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306895   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306907   36539 main.go:141] libmachine: (ha-168758)       <source network='mk-ha-168758'/>
	I1204 23:55:55.306917   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306926   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306935   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306944   36539 main.go:141] libmachine: (ha-168758)       <source network='default'/>
	I1204 23:55:55.306958   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306969   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306980   36539 main.go:141] libmachine: (ha-168758)     <serial type='pty'>
	I1204 23:55:55.306986   36539 main.go:141] libmachine: (ha-168758)       <target port='0'/>
	I1204 23:55:55.306993   36539 main.go:141] libmachine: (ha-168758)     </serial>
	I1204 23:55:55.307001   36539 main.go:141] libmachine: (ha-168758)     <console type='pty'>
	I1204 23:55:55.307011   36539 main.go:141] libmachine: (ha-168758)       <target type='serial' port='0'/>
	I1204 23:55:55.307022   36539 main.go:141] libmachine: (ha-168758)     </console>
	I1204 23:55:55.307036   36539 main.go:141] libmachine: (ha-168758)     <rng model='virtio'>
	I1204 23:55:55.307049   36539 main.go:141] libmachine: (ha-168758)       <backend model='random'>/dev/random</backend>
	I1204 23:55:55.307058   36539 main.go:141] libmachine: (ha-168758)     </rng>
	I1204 23:55:55.307065   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307074   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307081   36539 main.go:141] libmachine: (ha-168758)   </devices>
	I1204 23:55:55.307091   36539 main.go:141] libmachine: (ha-168758) </domain>
	I1204 23:55:55.307100   36539 main.go:141] libmachine: (ha-168758) 
	I1204 23:55:55.311286   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:cd:a8:a7 in network default
	I1204 23:55:55.311801   36539 main.go:141] libmachine: (ha-168758) Ensuring networks are active...
	I1204 23:55:55.311821   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:55.312494   36539 main.go:141] libmachine: (ha-168758) Ensuring network default is active
	I1204 23:55:55.312741   36539 main.go:141] libmachine: (ha-168758) Ensuring network mk-ha-168758 is active
	I1204 23:55:55.313181   36539 main.go:141] libmachine: (ha-168758) Getting domain xml...
	I1204 23:55:55.313756   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:56.499396   36539 main.go:141] libmachine: (ha-168758) Waiting to get IP...
	I1204 23:55:56.500076   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.500499   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.500544   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.500496   36562 retry.go:31] will retry after 269.735879ms: waiting for machine to come up
	I1204 23:55:56.772123   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.772694   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.772717   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.772654   36562 retry.go:31] will retry after 276.504012ms: waiting for machine to come up
	I1204 23:55:57.051105   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.051610   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.051639   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.051569   36562 retry.go:31] will retry after 401.653592ms: waiting for machine to come up
	I1204 23:55:57.455271   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.455618   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.455645   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.455573   36562 retry.go:31] will retry after 574.501264ms: waiting for machine to come up
	I1204 23:55:58.031181   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.031549   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.031571   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.031508   36562 retry.go:31] will retry after 481.845874ms: waiting for machine to come up
	I1204 23:55:58.515212   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.515533   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.515555   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.515482   36562 retry.go:31] will retry after 665.524951ms: waiting for machine to come up
	I1204 23:55:59.182393   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:59.182797   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:59.182828   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:59.182761   36562 retry.go:31] will retry after 904.131177ms: waiting for machine to come up
	I1204 23:56:00.088207   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:00.088652   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:00.088679   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:00.088622   36562 retry.go:31] will retry after 1.17453888s: waiting for machine to come up
	I1204 23:56:01.265172   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:01.265600   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:01.265628   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:01.265549   36562 retry.go:31] will retry after 1.695149003s: waiting for machine to come up
	I1204 23:56:02.963009   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:02.963469   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:02.963496   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:02.963425   36562 retry.go:31] will retry after 1.502018093s: waiting for machine to come up
	I1204 23:56:04.467087   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:04.467517   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:04.467540   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:04.467461   36562 retry.go:31] will retry after 2.001332834s: waiting for machine to come up
	I1204 23:56:06.470836   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:06.471294   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:06.471318   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:06.471260   36562 retry.go:31] will retry after 3.491074143s: waiting for machine to come up
	I1204 23:56:09.963576   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:09.964086   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:09.964117   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:09.964017   36562 retry.go:31] will retry after 3.362007605s: waiting for machine to come up
	I1204 23:56:13.329557   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:13.329894   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:13.329919   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:13.329871   36562 retry.go:31] will retry after 3.508639528s: waiting for machine to come up
	I1204 23:56:16.841192   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.841632   36539 main.go:141] libmachine: (ha-168758) Found IP for machine: 192.168.39.246
	I1204 23:56:16.841646   36539 main.go:141] libmachine: (ha-168758) Reserving static IP address...
	I1204 23:56:16.841659   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has current primary IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.842065   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find host DHCP lease matching {name: "ha-168758", mac: "52:54:00:e5:a0:85", ip: "192.168.39.246"} in network mk-ha-168758
	I1204 23:56:16.911192   36539 main.go:141] libmachine: (ha-168758) DBG | Getting to WaitForSSH function...
	I1204 23:56:16.911218   36539 main.go:141] libmachine: (ha-168758) Reserved static IP address: 192.168.39.246
	I1204 23:56:16.911227   36539 main.go:141] libmachine: (ha-168758) Waiting for SSH to be available...
	I1204 23:56:16.913359   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913692   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:16.913721   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913806   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH client type: external
	I1204 23:56:16.913831   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa (-rw-------)
	I1204 23:56:16.913874   36539 main.go:141] libmachine: (ha-168758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:56:16.913895   36539 main.go:141] libmachine: (ha-168758) DBG | About to run SSH command:
	I1204 23:56:16.913909   36539 main.go:141] libmachine: (ha-168758) DBG | exit 0
	I1204 23:56:17.042024   36539 main.go:141] libmachine: (ha-168758) DBG | SSH cmd err, output: <nil>: 
	I1204 23:56:17.042323   36539 main.go:141] libmachine: (ha-168758) KVM machine creation complete!
	I1204 23:56:17.042711   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:17.043239   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043439   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043673   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:56:17.043690   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:17.045108   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:56:17.045123   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:56:17.045129   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:56:17.045137   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.047586   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.047979   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.048020   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.048171   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.048340   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048499   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048610   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.048758   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.048940   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.048952   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:56:17.161079   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.161103   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:56:17.161111   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.163996   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164353   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.164381   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164502   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.164678   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164819   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164953   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.165084   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.165251   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.165266   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:56:17.278333   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:56:17.278413   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:56:17.278424   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:56:17.278431   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278638   36539 buildroot.go:166] provisioning hostname "ha-168758"
	I1204 23:56:17.278665   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278846   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.281326   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281670   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.281694   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281868   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.282066   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282192   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282295   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.282427   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.282619   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.282636   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1204 23:56:17.407414   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1204 23:56:17.407442   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.410140   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410515   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.410537   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410696   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.410903   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411049   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411179   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.411328   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.411496   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.411510   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:56:17.529969   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.530000   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:56:17.530033   36539 buildroot.go:174] setting up certificates
	I1204 23:56:17.530051   36539 provision.go:84] configureAuth start
	I1204 23:56:17.530060   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.530320   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:17.533080   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533538   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.533566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533710   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.535938   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536275   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.536304   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536461   36539 provision.go:143] copyHostCerts
	I1204 23:56:17.536485   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536522   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:56:17.536530   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536603   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:56:17.536688   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536713   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:56:17.536717   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536740   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:56:17.536823   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536843   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:56:17.536847   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536869   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:56:17.536930   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
	I1204 23:56:17.711262   36539 provision.go:177] copyRemoteCerts
	I1204 23:56:17.711322   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:56:17.711342   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.714054   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714497   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.714526   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714715   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.714918   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.715121   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.715311   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:17.799635   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:56:17.799713   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:56:17.821716   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:56:17.821774   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 23:56:17.843183   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:56:17.843274   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:56:17.864534   36539 provision.go:87] duration metric: took 334.473115ms to configureAuth
	I1204 23:56:17.864557   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:56:17.864710   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:17.864770   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.867187   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867580   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.867600   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867756   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.867942   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868106   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868250   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.868399   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.868552   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.868568   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:56:18.094883   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:56:18.094906   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:56:18.094924   36539 main.go:141] libmachine: (ha-168758) Calling .GetURL
	I1204 23:56:18.096038   36539 main.go:141] libmachine: (ha-168758) DBG | Using libvirt version 6000000
	I1204 23:56:18.097970   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098244   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.098281   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098435   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:56:18.098456   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:56:18.098466   36539 client.go:171] duration metric: took 23.320611985s to LocalClient.Create
	I1204 23:56:18.098496   36539 start.go:167] duration metric: took 23.320681504s to libmachine.API.Create "ha-168758"
	I1204 23:56:18.098512   36539 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1204 23:56:18.098543   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:56:18.098564   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.098810   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:56:18.098838   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.100934   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101269   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.101300   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101384   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.101557   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.101728   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.101868   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.187794   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:56:18.191682   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:56:18.191702   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:56:18.191762   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:56:18.191831   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:56:18.191844   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:56:18.191928   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:56:18.200599   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:18.221903   36539 start.go:296] duration metric: took 123.376466ms for postStartSetup
	I1204 23:56:18.221953   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:18.222479   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.225322   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225603   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.225629   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225822   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:18.226039   36539 start.go:128] duration metric: took 23.465963847s to createHost
	I1204 23:56:18.226063   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.227948   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228242   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.228279   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228403   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.228611   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228757   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228867   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.229008   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:18.229167   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:18.229181   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:56:18.342221   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356578.318371600
	
	I1204 23:56:18.342254   36539 fix.go:216] guest clock: 1733356578.318371600
	I1204 23:56:18.342262   36539 fix.go:229] Guest: 2024-12-04 23:56:18.3183716 +0000 UTC Remote: 2024-12-04 23:56:18.226051425 +0000 UTC m=+23.570737200 (delta=92.320175ms)
	I1204 23:56:18.342279   36539 fix.go:200] guest clock delta is within tolerance: 92.320175ms
	I1204 23:56:18.342284   36539 start.go:83] releasing machines lock for "ha-168758", held for 23.582292191s
	I1204 23:56:18.342302   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.342572   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.345220   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345537   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.345566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345727   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346174   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346341   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346431   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:56:18.346466   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.346494   36539 ssh_runner.go:195] Run: cat /version.json
	I1204 23:56:18.346512   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.348977   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349188   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349347   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349382   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349542   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349563   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349567   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349682   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349723   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349810   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.349858   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349927   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.349985   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.350095   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.458516   36539 ssh_runner.go:195] Run: systemctl --version
	I1204 23:56:18.464121   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:56:18.616560   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:56:18.621973   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:56:18.622057   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:56:18.637008   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:56:18.637037   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:56:18.637115   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:56:18.652474   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:56:18.665368   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:56:18.665440   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:56:18.678285   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:56:18.691433   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:56:18.809520   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:56:18.976690   36539 docker.go:233] disabling docker service ...
	I1204 23:56:18.976760   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:56:18.990927   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:56:19.002936   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:56:19.109515   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:56:19.215051   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:56:19.228643   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:56:19.245611   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:56:19.245668   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.255147   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:56:19.255206   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.264826   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.274125   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.283553   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:56:19.293134   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.302638   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.318465   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.327811   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:56:19.336218   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:56:19.336258   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:56:19.347830   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:56:19.356252   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:19.465000   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:56:19.556969   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:56:19.557036   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:56:19.561493   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:56:19.561547   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:56:19.564952   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:56:19.603625   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:56:19.603715   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.633677   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.660728   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:56:19.661971   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:19.664503   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.664856   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:19.664875   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.665065   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:56:19.668903   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:19.681192   36539 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:56:19.681288   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:19.681341   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:19.712021   36539 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 23:56:19.712076   36539 ssh_runner.go:195] Run: which lz4
	I1204 23:56:19.715771   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 23:56:19.715868   36539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 23:56:19.719604   36539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 23:56:19.719630   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 23:56:20.924241   36539 crio.go:462] duration metric: took 1.208402192s to copy over tarball
	I1204 23:56:20.924314   36539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 23:56:22.898491   36539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.974144098s)
	I1204 23:56:22.898524   36539 crio.go:469] duration metric: took 1.974249374s to extract the tarball
	I1204 23:56:22.898533   36539 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 23:56:22.934474   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:22.976510   36539 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:56:22.976531   36539 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:56:22.976539   36539 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1204 23:56:22.976632   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:56:22.976690   36539 ssh_runner.go:195] Run: crio config
	I1204 23:56:23.020705   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:23.020725   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:23.020733   36539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:56:23.020757   36539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:56:23.020887   36539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:56:23.020918   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:56:23.020968   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:56:23.036384   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:56:23.036487   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:56:23.036538   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:56:23.045918   36539 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:56:23.045983   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 23:56:23.054649   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 23:56:23.069725   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:56:23.085152   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 23:56:23.100420   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 23:56:23.115477   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:56:23.118939   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:23.130202   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:23.249727   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:56:23.265905   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1204 23:56:23.265923   36539 certs.go:194] generating shared ca certs ...
	I1204 23:56:23.265936   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.266102   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:56:23.266137   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:56:23.266147   36539 certs.go:256] generating profile certs ...
	I1204 23:56:23.266191   36539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:56:23.266210   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt with IP's: []
	I1204 23:56:23.454121   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt ...
	I1204 23:56:23.454147   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt: {Name:mk99d8a201f862390e96b084b532473bf8d8aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454316   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key ...
	I1204 23:56:23.454328   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key: {Name:mkfdc708535eba27b71f019b2247ebd8ebc65dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454403   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a
	I1204 23:56:23.454418   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I1204 23:56:23.571650   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a ...
	I1204 23:56:23.571677   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a: {Name:mk2304c3cee7d491c51de92409bc18bd5dd4ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571829   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a ...
	I1204 23:56:23.571841   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a: {Name:mkd75785818ce3fa6f4d0786b8ae41ba1de41627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571908   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:56:23.572007   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:56:23.572063   36539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:56:23.572077   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt with IP's: []
	I1204 23:56:23.673268   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt ...
	I1204 23:56:23.673293   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt: {Name:mkd46a8ffbcb119ca858529996fc89416032f07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673434   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key ...
	I1204 23:56:23.673444   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key: {Name:mk7fc42a81c367eaf7af5377c2050677c25e0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673507   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:56:23.673523   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:56:23.673534   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:56:23.673547   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:56:23.673561   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:56:23.673574   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:56:23.673586   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:56:23.673598   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:56:23.673642   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:56:23.673674   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:56:23.673683   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:56:23.673707   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:56:23.673729   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:56:23.673751   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:56:23.673787   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:23.673812   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.673825   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.673837   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:23.674359   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:56:23.698874   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:56:23.720936   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:56:23.742507   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:56:23.763840   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 23:56:23.787615   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:56:23.809303   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:56:23.830746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:56:23.852712   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:56:23.874053   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:56:23.895302   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:56:23.916675   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:56:23.931879   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:56:23.937416   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:56:23.947727   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951899   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.957401   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:56:23.967356   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:56:23.977327   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981332   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981387   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.986550   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:56:23.996547   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:56:24.006567   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010520   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010574   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.015659   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
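
The three certificate rounds above (21385.pem, 213852.pem, minikubeCA.pem) repeat one idiom: compute the certificate's OpenSSL subject hash, then symlink the PEM into /etc/ssl/certs as <hash>.0 unless a link is already there. Below is a minimal standalone Go sketch of that idiom, reconstructed from the commands in the log rather than taken from minikube's certs.go; the example path is the one shown above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the pattern in the log: hash the PEM with
// "openssl x509 -hash -noout" and symlink it into /etc/ssl/certs as <hash>.0
// so OpenSSL-based clients can look the CA up by subject hash.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // equivalent of "test -L <link>": link already present
	}
	return os.Symlink(pemPath, link) // equivalent of "ln -fs <pem> <link>"
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
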
	I1204 23:56:24.028404   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:56:24.032403   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:56:24.032453   36539 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:56:24.032536   36539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:56:24.032593   36539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:56:24.081213   36539 cri.go:89] found id: ""
	I1204 23:56:24.081287   36539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:56:24.095063   36539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:56:24.104106   36539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:56:24.113190   36539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:56:24.113206   36539 kubeadm.go:157] found existing configuration files:
	
	I1204 23:56:24.113242   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:56:24.121657   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:56:24.121703   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:56:24.130638   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:56:24.139019   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:56:24.139075   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:56:24.147703   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.156160   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:56:24.156205   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.165072   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:56:24.173473   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:56:24.173525   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
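
The block above applies the same rule to each kubeconfig: if the file does not mention https://control-plane.minikube.internal:8443 (here every grep fails simply because the files do not exist yet), it is treated as stale and removed so that kubeadm init can regenerate it. The following is a hedged Go sketch of that rule; the helper name is invented and this is not minikube's kubeadm.go.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
func removeStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected endpoint, keep it
		}
		// Equivalent of "sudo rm -f <f>": errors for already-missing files are ignored.
		_ = os.Remove(f)
		fmt.Printf("cleaned up %s\n", f)
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
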
	I1204 23:56:24.182301   36539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 23:56:24.376697   36539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:56:34.579933   36539 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:56:34.580018   36539 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:56:34.580122   36539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:56:34.580249   36539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:56:34.580403   36539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:56:34.580494   36539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:56:34.582178   36539 out.go:235]   - Generating certificates and keys ...
	I1204 23:56:34.582281   36539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:56:34.582355   36539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:56:34.582417   36539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:56:34.582467   36539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:56:34.582518   36539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:56:34.582564   36539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:56:34.582636   36539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:56:34.582772   36539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582829   36539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:56:34.582931   36539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582990   36539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:56:34.583041   36539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:56:34.583090   36539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:56:34.583139   36539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:56:34.583188   36539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:56:34.583239   36539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:56:34.583290   36539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:56:34.583344   36539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:56:34.583394   36539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:56:34.583463   36539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:56:34.583529   36539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:56:34.585082   36539 out.go:235]   - Booting up control plane ...
	I1204 23:56:34.585162   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:56:34.585225   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:56:34.585285   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:56:34.585417   36539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:56:34.585527   36539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:56:34.585583   36539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:56:34.585716   36539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:56:34.585831   36539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:56:34.585888   36539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.206497ms
	I1204 23:56:34.585961   36539 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:56:34.586024   36539 kubeadm.go:310] [api-check] The API server is healthy after 5.989146948s
	I1204 23:56:34.586115   36539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:56:34.586292   36539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:56:34.586380   36539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:56:34.586600   36539 kubeadm.go:310] [mark-control-plane] Marking the node ha-168758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:56:34.586682   36539 kubeadm.go:310] [bootstrap-token] Using token: xmluo7.j7yx8f6chysik4ve
	I1204 23:56:34.588072   36539 out.go:235]   - Configuring RBAC rules ...
	I1204 23:56:34.588170   36539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:56:34.588259   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:56:34.588422   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:56:34.588567   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:56:34.588682   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:56:34.588762   36539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:56:34.588863   36539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:56:34.588922   36539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:56:34.588962   36539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:56:34.588968   36539 kubeadm.go:310] 
	I1204 23:56:34.589020   36539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:56:34.589026   36539 kubeadm.go:310] 
	I1204 23:56:34.589098   36539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:56:34.589104   36539 kubeadm.go:310] 
	I1204 23:56:34.589125   36539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:56:34.589179   36539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:56:34.589225   36539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:56:34.589231   36539 kubeadm.go:310] 
	I1204 23:56:34.589295   36539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:56:34.589308   36539 kubeadm.go:310] 
	I1204 23:56:34.589352   36539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:56:34.589359   36539 kubeadm.go:310] 
	I1204 23:56:34.589401   36539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:56:34.589475   36539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:56:34.589570   36539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:56:34.589580   36539 kubeadm.go:310] 
	I1204 23:56:34.589656   36539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:56:34.589727   36539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:56:34.589735   36539 kubeadm.go:310] 
	I1204 23:56:34.589810   36539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.589952   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c \
	I1204 23:56:34.589986   36539 kubeadm.go:310] 	--control-plane 
	I1204 23:56:34.589993   36539 kubeadm.go:310] 
	I1204 23:56:34.590071   36539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:56:34.590078   36539 kubeadm.go:310] 
	I1204 23:56:34.590151   36539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.590255   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c 
	I1204 23:56:34.590265   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:34.590270   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:34.591672   36539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:56:34.592995   36539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:56:34.598320   36539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:56:34.598340   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:56:34.616335   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 23:56:34.967246   36539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:56:34.967343   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:34.967389   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758 minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=true
	I1204 23:56:35.009341   36539 ops.go:34] apiserver oom_adj: -16
	I1204 23:56:35.191577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:35.691603   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.192181   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.691747   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.192577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.691663   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.837937   36539 kubeadm.go:1113] duration metric: took 2.870656691s to wait for elevateKubeSystemPrivileges
	I1204 23:56:37.837999   36539 kubeadm.go:394] duration metric: took 13.80554404s to StartCluster
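
The repeated "kubectl get sa default" calls above are a plain poll: retry on a roughly half-second cadence until the default service account exists (about 2.9s in this run), so the minikube-rbac cluster-admin binding created just before it has something to bind to. Below is a standalone sketch of that wait loop, assuming kubectl is on PATH; it is not the elevateKubeSystemPrivileges implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// timeout expires, mirroring the half-second cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
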
	I1204 23:56:37.838021   36539 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.838113   36539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.838945   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.839186   36539 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:37.839215   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:56:37.839197   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:56:37.839216   36539 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 23:56:37.839321   36539 addons.go:69] Setting storage-provisioner=true in profile "ha-168758"
	I1204 23:56:37.839348   36539 addons.go:234] Setting addon storage-provisioner=true in "ha-168758"
	I1204 23:56:37.839351   36539 addons.go:69] Setting default-storageclass=true in profile "ha-168758"
	I1204 23:56:37.839375   36539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-168758"
	I1204 23:56:37.839380   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:37.839384   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.839759   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839800   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.839824   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839863   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.854550   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1204 23:56:37.854839   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1204 23:56:37.854964   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855259   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855493   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855517   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855736   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855762   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855853   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.855980   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.856068   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.856674   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.856721   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.858200   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.858527   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 23:56:37.859020   36539 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 23:56:37.859271   36539 addons.go:234] Setting addon default-storageclass=true in "ha-168758"
	I1204 23:56:37.859308   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.859664   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.859703   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.872166   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1204 23:56:37.872824   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.873354   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.873383   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.873694   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.873898   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.873955   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I1204 23:56:37.874272   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.874700   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.874726   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.875097   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.875640   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.875644   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.875720   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.877494   36539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:56:37.878808   36539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:37.878822   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:56:37.878836   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.882048   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882510   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.882542   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882617   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.882791   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.882930   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.883065   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.890703   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1204 23:56:37.891111   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.891600   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.891624   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.891905   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.892126   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.893659   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.893845   36539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:37.893861   36539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:56:37.893876   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.896603   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897013   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.897035   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897191   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.897338   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.897461   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.897562   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.981170   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:56:38.100674   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:38.124529   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:38.478625   36539 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
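
For readability, the sed pipeline a few lines up edits the CoreDNS Corefile in place: it adds a log directive ahead of the errors plugin and, ahead of the forward plugin, a hosts block so that host.minikube.internal resolves to the host-side gateway. Reconstructed from that command (not copied from the live ConfigMap), the injected block looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
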
	I1204 23:56:38.478692   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.478710   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.478984   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479004   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479014   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.479022   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.479240   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479255   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479326   36539 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 23:56:38.479347   36539 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 23:56:38.479442   36539 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 23:56:38.479452   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.479462   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.479469   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.494926   36539 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1204 23:56:38.495499   36539 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 23:56:38.495513   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.495520   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.495529   36539 round_trippers.go:473]     Content-Type: application/json
	I1204 23:56:38.495532   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.504036   36539 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 23:56:38.504185   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.504196   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.504490   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.504510   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.504543   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.728533   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728559   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.728849   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.728876   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.728885   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728893   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.729129   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.729202   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.729219   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.730946   36539 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 23:56:38.732198   36539 addons.go:510] duration metric: took 892.982269ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 23:56:38.732240   36539 start.go:246] waiting for cluster config update ...
	I1204 23:56:38.732263   36539 start.go:255] writing updated cluster config ...
	I1204 23:56:38.733842   36539 out.go:201] 
	I1204 23:56:38.735235   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:38.735344   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.736922   36539 out.go:177] * Starting "ha-168758-m02" control-plane node in "ha-168758" cluster
	I1204 23:56:38.738175   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:38.738201   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:56:38.738327   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:56:38.738348   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:56:38.738442   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.738692   36539 start.go:360] acquireMachinesLock for ha-168758-m02: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:56:38.738753   36539 start.go:364] duration metric: took 32.863µs to acquireMachinesLock for "ha-168758-m02"
	I1204 23:56:38.738778   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:38.738869   36539 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 23:56:38.740536   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:56:38.740652   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:38.740698   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:38.755586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I1204 23:56:38.756099   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:38.756624   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:38.756663   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:38.757003   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:38.757205   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:56:38.757343   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:56:38.757484   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:56:38.757510   36539 client.go:168] LocalClient.Create starting
	I1204 23:56:38.757536   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:56:38.757571   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757584   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757629   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:56:38.757661   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757671   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757684   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:56:38.757692   36539 main.go:141] libmachine: (ha-168758-m02) Calling .PreCreateCheck
	I1204 23:56:38.757862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:56:38.758275   36539 main.go:141] libmachine: Creating machine...
	I1204 23:56:38.758289   36539 main.go:141] libmachine: (ha-168758-m02) Calling .Create
	I1204 23:56:38.758443   36539 main.go:141] libmachine: (ha-168758-m02) Creating KVM machine...
	I1204 23:56:38.759771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing default KVM network
	I1204 23:56:38.759905   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing private KVM network mk-ha-168758
	I1204 23:56:38.760010   36539 main.go:141] libmachine: (ha-168758-m02) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:38.760033   36539 main.go:141] libmachine: (ha-168758-m02) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:56:38.760144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:38.760017   36906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:38.760207   36539 main.go:141] libmachine: (ha-168758-m02) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:56:39.002966   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.002856   36906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa...
	I1204 23:56:39.068744   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068644   36906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk...
	I1204 23:56:39.068771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing magic tar header
	I1204 23:56:39.068781   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing SSH key tar header
	I1204 23:56:39.068788   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068754   36906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:39.068880   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02
	I1204 23:56:39.068913   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 (perms=drwx------)
	I1204 23:56:39.068925   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:56:39.068946   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:39.068960   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:56:39.068973   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:56:39.068987   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:56:39.068997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:56:39.069012   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:56:39.069021   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home
	I1204 23:56:39.069037   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Skipping /home - not owner
	I1204 23:56:39.069073   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:56:39.069102   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:56:39.069123   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:56:39.069137   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:39.070045   36539 main.go:141] libmachine: (ha-168758-m02) define libvirt domain using xml: 
	I1204 23:56:39.070076   36539 main.go:141] libmachine: (ha-168758-m02) <domain type='kvm'>
	I1204 23:56:39.070100   36539 main.go:141] libmachine: (ha-168758-m02)   <name>ha-168758-m02</name>
	I1204 23:56:39.070118   36539 main.go:141] libmachine: (ha-168758-m02)   <memory unit='MiB'>2200</memory>
	I1204 23:56:39.070139   36539 main.go:141] libmachine: (ha-168758-m02)   <vcpu>2</vcpu>
	I1204 23:56:39.070157   36539 main.go:141] libmachine: (ha-168758-m02)   <features>
	I1204 23:56:39.070172   36539 main.go:141] libmachine: (ha-168758-m02)     <acpi/>
	I1204 23:56:39.070188   36539 main.go:141] libmachine: (ha-168758-m02)     <apic/>
	I1204 23:56:39.070199   36539 main.go:141] libmachine: (ha-168758-m02)     <pae/>
	I1204 23:56:39.070210   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070233   36539 main.go:141] libmachine: (ha-168758-m02)   </features>
	I1204 23:56:39.070244   36539 main.go:141] libmachine: (ha-168758-m02)   <cpu mode='host-passthrough'>
	I1204 23:56:39.070251   36539 main.go:141] libmachine: (ha-168758-m02)   
	I1204 23:56:39.070266   36539 main.go:141] libmachine: (ha-168758-m02)   </cpu>
	I1204 23:56:39.070275   36539 main.go:141] libmachine: (ha-168758-m02)   <os>
	I1204 23:56:39.070282   36539 main.go:141] libmachine: (ha-168758-m02)     <type>hvm</type>
	I1204 23:56:39.070293   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='cdrom'/>
	I1204 23:56:39.070303   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='hd'/>
	I1204 23:56:39.070312   36539 main.go:141] libmachine: (ha-168758-m02)     <bootmenu enable='no'/>
	I1204 23:56:39.070322   36539 main.go:141] libmachine: (ha-168758-m02)   </os>
	I1204 23:56:39.070334   36539 main.go:141] libmachine: (ha-168758-m02)   <devices>
	I1204 23:56:39.070343   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='cdrom'>
	I1204 23:56:39.070361   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/boot2docker.iso'/>
	I1204 23:56:39.070369   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hdc' bus='scsi'/>
	I1204 23:56:39.070376   36539 main.go:141] libmachine: (ha-168758-m02)       <readonly/>
	I1204 23:56:39.070387   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070396   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='disk'>
	I1204 23:56:39.070401   36539 main.go:141] libmachine: (ha-168758-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:56:39.070412   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk'/>
	I1204 23:56:39.070427   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hda' bus='virtio'/>
	I1204 23:56:39.070435   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070439   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070447   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='mk-ha-168758'/>
	I1204 23:56:39.070452   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070459   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070463   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070468   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='default'/>
	I1204 23:56:39.070475   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070479   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070483   36539 main.go:141] libmachine: (ha-168758-m02)     <serial type='pty'>
	I1204 23:56:39.070500   36539 main.go:141] libmachine: (ha-168758-m02)       <target port='0'/>
	I1204 23:56:39.070515   36539 main.go:141] libmachine: (ha-168758-m02)     </serial>
	I1204 23:56:39.070526   36539 main.go:141] libmachine: (ha-168758-m02)     <console type='pty'>
	I1204 23:56:39.070533   36539 main.go:141] libmachine: (ha-168758-m02)       <target type='serial' port='0'/>
	I1204 23:56:39.070539   36539 main.go:141] libmachine: (ha-168758-m02)     </console>
	I1204 23:56:39.070547   36539 main.go:141] libmachine: (ha-168758-m02)     <rng model='virtio'>
	I1204 23:56:39.070561   36539 main.go:141] libmachine: (ha-168758-m02)       <backend model='random'>/dev/random</backend>
	I1204 23:56:39.070568   36539 main.go:141] libmachine: (ha-168758-m02)     </rng>
	I1204 23:56:39.070579   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070586   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070604   36539 main.go:141] libmachine: (ha-168758-m02)   </devices>
	I1204 23:56:39.070621   36539 main.go:141] libmachine: (ha-168758-m02) </domain>
	I1204 23:56:39.070635   36539 main.go:141] libmachine: (ha-168758-m02) 
	I1204 23:56:39.077113   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:a2:74:ea in network default
	I1204 23:56:39.077678   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring networks are active...
	I1204 23:56:39.077701   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:39.078445   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network default is active
	I1204 23:56:39.078865   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network mk-ha-168758 is active
	I1204 23:56:39.079237   36539 main.go:141] libmachine: (ha-168758-m02) Getting domain xml...
	I1204 23:56:39.079930   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:40.265017   36539 main.go:141] libmachine: (ha-168758-m02) Waiting to get IP...
	I1204 23:56:40.265754   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.266145   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.266182   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.266132   36906 retry.go:31] will retry after 272.630461ms: waiting for machine to come up
	I1204 23:56:40.540436   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.540778   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.540807   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.540731   36906 retry.go:31] will retry after 280.679773ms: waiting for machine to come up
	I1204 23:56:40.823193   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.823606   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.823628   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.823550   36906 retry.go:31] will retry after 428.494355ms: waiting for machine to come up
	I1204 23:56:41.253121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.253634   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.253657   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.253584   36906 retry.go:31] will retry after 443.923324ms: waiting for machine to come up
	I1204 23:56:41.698819   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.699183   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.699213   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.699129   36906 retry.go:31] will retry after 522.039893ms: waiting for machine to come up
	I1204 23:56:42.222513   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:42.223045   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:42.223075   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:42.222991   36906 retry.go:31] will retry after 778.146913ms: waiting for machine to come up
	I1204 23:56:43.002368   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.002725   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.002748   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.002697   36906 retry.go:31] will retry after 985.520994ms: waiting for machine to come up
	I1204 23:56:43.990537   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.991046   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.991072   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.991013   36906 retry.go:31] will retry after 1.065554597s: waiting for machine to come up
	I1204 23:56:45.057549   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:45.057997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:45.058038   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:45.057954   36906 retry.go:31] will retry after 1.75203222s: waiting for machine to come up
	I1204 23:56:46.812766   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:46.813189   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:46.813218   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:46.813128   36906 retry.go:31] will retry after 1.936174375s: waiting for machine to come up
	I1204 23:56:48.752161   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:48.752595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:48.752615   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:48.752561   36906 retry.go:31] will retry after 2.006049391s: waiting for machine to come up
	I1204 23:56:50.760948   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:50.761423   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:50.761448   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:50.761364   36906 retry.go:31] will retry after 3.635507158s: waiting for machine to come up
	I1204 23:56:54.398466   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:54.398888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:54.398919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:54.398858   36906 retry.go:31] will retry after 3.280789577s: waiting for machine to come up
	I1204 23:56:57.683343   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:57.683888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:57.683924   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:57.683859   36906 retry.go:31] will retry after 5.640187241s: waiting for machine to come up
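The block above is the driver's wait loop: each failed lookup of the domain's DHCP lease is followed by a progressively longer, jittered pause before the next attempt ("will retry after ..."). Below is a minimal Go sketch of that retry-with-backoff pattern; the delay policy and the fake lookup function are assumptions for illustration, not the exact logic in minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or maxWait elapses, sleeping for a
// growing, jittered delay between attempts (illustrative policy only).
func retry(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	base := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", maxWait, err)
		}
		// Grow the delay with each attempt and add jitter so parallel
		// waiters do not poll libvirt in lockstep.
		delay := time.Duration(attempt+1) * base
		delay += time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	calls := 0
	err := retry(func() error {
		calls++
		if calls < 4 {
			// Stand-in for "unable to find current IP address of domain".
			return errors.New("machine not up yet")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("result:", err, "after", calls, "attempts")
}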
	I1204 23:57:03.329060   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329506   36539 main.go:141] libmachine: (ha-168758-m02) Found IP for machine: 192.168.39.70
	I1204 23:57:03.329530   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329536   36539 main.go:141] libmachine: (ha-168758-m02) Reserving static IP address...
	I1204 23:57:03.329911   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find host DHCP lease matching {name: "ha-168758-m02", mac: "52:54:00:e2:30:30", ip: "192.168.39.70"} in network mk-ha-168758
	I1204 23:57:03.403859   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Getting to WaitForSSH function...
	I1204 23:57:03.403880   36539 main.go:141] libmachine: (ha-168758-m02) Reserved static IP address: 192.168.39.70
	I1204 23:57:03.403891   36539 main.go:141] libmachine: (ha-168758-m02) Waiting for SSH to be available...
	I1204 23:57:03.406504   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407026   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.407061   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407259   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH client type: external
	I1204 23:57:03.407286   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa (-rw-------)
	I1204 23:57:03.407378   36539 main.go:141] libmachine: (ha-168758-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:57:03.407404   36539 main.go:141] libmachine: (ha-168758-m02) DBG | About to run SSH command:
	I1204 23:57:03.407413   36539 main.go:141] libmachine: (ha-168758-m02) DBG | exit 0
	I1204 23:57:03.534727   36539 main.go:141] libmachine: (ha-168758-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 23:57:03.535017   36539 main.go:141] libmachine: (ha-168758-m02) KVM machine creation complete!
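Before the machine is declared created, the driver probes it by running a no-op "exit 0" over an external ssh client until the command succeeds, as the WaitForSSH lines above show. A rough sketch of that probe, shelling out to the ssh binary with a similar flag set, follows; the host, key path, timeout and exact options are placeholders, not minikube's full argument list.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" over ssh until it succeeds or the timeout expires.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and accepts the key
		} else if time.Now().After(deadline) {
			return fmt.Errorf("ssh to %s not available after %v: %w", host, timeout, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Hypothetical values mirroring the node above.
	if err := waitForSSH("192.168.39.70", "/home/jenkins/.minikube/machines/ha-168758-m02/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}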
	I1204 23:57:03.535420   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:03.536018   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536208   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536363   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:57:03.536382   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetState
	I1204 23:57:03.537483   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:57:03.537500   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:57:03.537508   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:57:03.537516   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.539751   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540081   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.540109   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540281   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.540440   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540587   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540727   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.540880   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.541129   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.541145   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:57:03.649219   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:03.649249   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:57:03.649261   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.652199   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.652856   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652885   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.653079   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653247   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653419   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.653613   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.653828   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.653842   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:57:03.763190   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:57:03.763346   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:57:03.763361   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:57:03.763371   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763602   36539 buildroot.go:166] provisioning hostname "ha-168758-m02"
	I1204 23:57:03.763632   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.766973   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767390   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.767437   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767603   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.767818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768055   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768271   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.768440   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.768709   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.768726   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m02 && echo "ha-168758-m02" | sudo tee /etc/hostname
	I1204 23:57:03.892672   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m02
	
	I1204 23:57:03.892702   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.895144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895422   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.895442   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895624   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.895781   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.895901   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.896009   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.896154   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.896362   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.896378   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:57:04.014565   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:04.014591   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:57:04.014605   36539 buildroot.go:174] setting up certificates
	I1204 23:57:04.014613   36539 provision.go:84] configureAuth start
	I1204 23:57:04.014620   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:04.014862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.017595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.017892   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.017919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.018050   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.020121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020411   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.020432   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020630   36539 provision.go:143] copyHostCerts
	I1204 23:57:04.020662   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020697   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:57:04.020725   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020805   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:57:04.020931   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.020973   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:57:04.020984   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.021027   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:57:04.021089   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021109   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:57:04.021118   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021154   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:57:04.021216   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m02 san=[127.0.0.1 192.168.39.70 ha-168758-m02 localhost minikube]
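The server certificate above is generated on the host and signed by the shared minikube CA, with SANs covering the node's loopback and guest IPs plus its hostnames. Below is a self-contained sketch of that step using crypto/x509; the throwaway in-memory CA and ECDSA keys are assumptions made so the example runs standalone and do not reflect minikube's actual key types or on-disk layout.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the reused minikubeCA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs match the san=[...] list in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-168758-m02", Organization: []string{"jenkins.ha-168758-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
		DNSNames:     []string{"ha-168758-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}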
	I1204 23:57:04.229700   36539 provision.go:177] copyRemoteCerts
	I1204 23:57:04.229780   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:57:04.229810   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.233253   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233650   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.233681   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233956   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.234161   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.234318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.234434   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.316064   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:57:04.316143   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:57:04.338512   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:57:04.338585   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:57:04.360119   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:57:04.360178   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:57:04.381996   36539 provision.go:87] duration metric: took 367.37237ms to configureAuth
	I1204 23:57:04.382022   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:57:04.382234   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:04.382318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.384736   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385043   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.385085   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385263   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.385456   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385634   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385765   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.385893   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.386085   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.386102   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:57:04.615454   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:57:04.615480   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:57:04.615488   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetURL
	I1204 23:57:04.616607   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using libvirt version 6000000
	I1204 23:57:04.618773   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619138   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.619174   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619325   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:57:04.619342   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:57:04.619349   36539 client.go:171] duration metric: took 25.861833034s to LocalClient.Create
	I1204 23:57:04.619371   36539 start.go:167] duration metric: took 25.86188766s to libmachine.API.Create "ha-168758"
	I1204 23:57:04.619385   36539 start.go:293] postStartSetup for "ha-168758-m02" (driver="kvm2")
	I1204 23:57:04.619400   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:57:04.619421   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.619646   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:57:04.619675   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.622076   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622449   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.622480   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622622   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.622800   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.622964   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.623116   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.712567   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:57:04.716652   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:57:04.716681   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:57:04.716761   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:57:04.716858   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:57:04.716869   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:57:04.716975   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:57:04.726044   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:04.748709   36539 start.go:296] duration metric: took 129.292597ms for postStartSetup
	I1204 23:57:04.748769   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:04.749436   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.751961   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752346   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.752376   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752616   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:04.752793   36539 start.go:128] duration metric: took 26.013914076s to createHost
	I1204 23:57:04.752815   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.755319   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755652   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.755682   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755827   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.755971   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756086   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756189   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.756307   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.756479   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.756492   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:57:04.866426   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356624.824376765
	
	I1204 23:57:04.866448   36539 fix.go:216] guest clock: 1733356624.824376765
	I1204 23:57:04.866456   36539 fix.go:229] Guest: 2024-12-04 23:57:04.824376765 +0000 UTC Remote: 2024-12-04 23:57:04.752806033 +0000 UTC m=+70.097491807 (delta=71.570732ms)
	I1204 23:57:04.866470   36539 fix.go:200] guest clock delta is within tolerance: 71.570732ms
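The guest clock check above runs date +%s.%N on the VM, parses the seconds.nanoseconds output, and compares it with the host's wall clock; here the ~71ms delta is accepted. A sketch of that parse-and-compare step is below; the 2-second tolerance is an assumed value for illustration, since the log only shows that a small delta passed.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly 9 nanosecond digits.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733356624.824376765") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) > 2 { // assumed tolerance for this sketch
		fmt.Println("delta outside tolerance, would resync guest clock")
	} else {
		fmt.Println("delta within tolerance")
	}
}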
	I1204 23:57:04.866475   36539 start.go:83] releasing machines lock for "ha-168758-m02", held for 26.12770888s
	I1204 23:57:04.866493   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.866780   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.869289   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.869690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.869723   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.871575   36539 out.go:177] * Found network options:
	I1204 23:57:04.872861   36539 out.go:177]   - NO_PROXY=192.168.39.246
	W1204 23:57:04.874174   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.874219   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.874854   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875047   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875134   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:57:04.875168   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	W1204 23:57:04.875274   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.875348   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:57:04.875364   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.877779   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878165   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878255   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878300   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878462   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878636   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.878704   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878726   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878789   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.878869   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878940   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.878980   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.879169   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.879295   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:05.115514   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:57:05.121578   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:57:05.121649   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:57:05.137141   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:57:05.137166   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:57:05.137249   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:57:05.153397   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:57:05.167629   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:57:05.167697   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:57:05.180991   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:57:05.194817   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:57:05.319862   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:57:05.480267   36539 docker.go:233] disabling docker service ...
	I1204 23:57:05.480352   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:57:05.494063   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:57:05.506104   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:57:05.622700   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:57:05.745424   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:57:05.759393   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:57:05.776697   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:57:05.776762   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.787070   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:57:05.787137   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.797124   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.807348   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.816991   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:57:05.826982   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.836497   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.852423   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.862453   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:57:05.871289   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:57:05.871338   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:57:05.883727   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
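The lines above show a netfilter fallback: the sysctl probe for net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. The Go sketch below reproduces that chain locally via os/exec, which is a simplification; minikube runs these commands on the guest through its ssh runner.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks the bridge-nf sysctl, loads br_netfilter if the
// key is missing, and enables IPv4 forwarding (needs root to actually apply).
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// A missing key usually means the module is not loaded yet.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}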
	I1204 23:57:05.893241   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:06.013483   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:57:06.107092   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:57:06.107171   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:57:06.112530   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:57:06.112612   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:57:06.116309   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:57:06.157302   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:57:06.157371   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.184889   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.212833   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:57:06.214532   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:57:06.215684   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:06.218318   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:06.218715   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218889   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:57:06.222805   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
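The /etc/hosts update above is idempotent: it strips any existing host.minikube.internal line before appending the gateway mapping, so repeated starts do not accumulate duplicate entries. The sketch below reproduces that rewrite on an in-memory string; writing the result back over ssh, as minikube does, is omitted.

package main

import (
	"fmt"
	"strings"
)

// addHostsEntry drops any stale "<ip>\t<name>" mapping and appends a fresh one.
func addHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the old mapping before re-adding it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	orig := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(addHostsEntry(orig, "192.168.39.1", "host.minikube.internal"))
}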
	I1204 23:57:06.234260   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:57:06.234431   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:06.234663   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.234694   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.249086   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1204 23:57:06.249474   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.249897   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.249916   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.250208   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.250399   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:57:06.251961   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:06.252244   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.252279   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.266555   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1204 23:57:06.267012   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.267500   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.267519   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.267807   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.267965   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:06.268109   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.70
	I1204 23:57:06.268124   36539 certs.go:194] generating shared ca certs ...
	I1204 23:57:06.268142   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.268293   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:57:06.268346   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:57:06.268359   36539 certs.go:256] generating profile certs ...
	I1204 23:57:06.268451   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:57:06.268485   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d
	I1204 23:57:06.268506   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.254]
	I1204 23:57:06.433427   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d ...
	I1204 23:57:06.433455   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d: {Name:mk9b5ab41b84365c7c87c13c8ccd0d5c3db76aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433623   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d ...
	I1204 23:57:06.433643   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d: {Name:mk61c76c74eaed61a46e313e778d5b43d1ba813c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433739   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:57:06.433896   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:57:06.434089   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:57:06.434105   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:57:06.434120   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:57:06.434140   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:57:06.434158   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:57:06.434176   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:57:06.434192   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:57:06.434208   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:57:06.434229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:57:06.434295   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:57:06.434333   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:57:06.434347   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:57:06.434398   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:57:06.434429   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:57:06.434462   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:57:06.434512   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:06.434550   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:57:06.434571   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.434596   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:57:06.434632   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:06.437677   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438091   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:06.438121   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438323   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:06.438484   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:06.438627   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:06.438715   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:06.518370   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:57:06.523710   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:57:06.534968   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:57:06.539161   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:57:06.549270   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:57:06.553348   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:57:06.562722   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:57:06.566572   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:57:06.575817   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:57:06.579621   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:57:06.588791   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:57:06.592406   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:57:06.601971   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:57:06.626102   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:57:06.647746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:57:06.670354   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:57:06.700509   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 23:57:06.722552   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:57:06.745113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:57:06.767304   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:57:06.789718   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:57:06.812073   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:57:06.834754   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:57:06.857779   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:57:06.872839   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:57:06.887702   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:57:06.903420   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:57:06.918686   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:57:06.933715   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:57:06.948744   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:57:06.963779   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:57:06.969267   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:57:06.978970   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.982989   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.983047   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.988382   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:57:06.997926   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:57:07.007572   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011739   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011796   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.017095   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:57:07.027066   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:57:07.037055   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041161   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041220   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.046426   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
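Each `ln -fs` above pairs a CA file in /usr/share/ca-certificates with a symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, and so on), which is how OpenSSL-based clients look up trust anchors. The same step by hand, as a sketch:

    # Link a CA into the OpenSSL trust directory under its subject-name hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"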
	I1204 23:57:07.056376   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:57:07.060388   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:57:07.060436   36539 kubeadm.go:934] updating node {m02 192.168.39.70 8443 v1.31.2 crio true true} ...
	I1204 23:57:07.060522   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:57:07.060547   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:57:07.060575   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:57:07.074631   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:57:07.074712   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
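This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip as a static pod and the elected leader announces the HA VIP 192.168.39.254 on eth0 via ARP. A quick way to check the result by hand (illustrative commands, not part of the test run):

    # Verify the static pod manifest landed and the VIP is held by the current leader.
    sudo ls /etc/kubernetes/manifests/kube-vip.yaml
    ip addr show dev eth0 | grep 192.168.39.254    # present only on the leader node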
	I1204 23:57:07.074777   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.084179   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:57:07.084224   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.093203   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:57:07.093229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093277   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 23:57:07.093292   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093312   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 23:57:07.097086   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:57:07.097113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:57:08.011623   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.011699   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.017164   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:57:08.017200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:57:08.175961   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:08.200946   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.201051   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.211652   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:57:08.211693   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
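The kubectl, kubeadm and kubelet binaries are pulled from dl.k8s.io, verified against the .sha256 file referenced in the `?checksum=file:` part of each URL above, cached under .minikube/cache, and then copied into /var/lib/minikube/binaries on the node. A minimal sketch of the download-and-verify step for one binary:

    # Fetch kubelet v1.31.2 and check it against the published sha256 (URLs from the log).
    VER=v1.31.2
    curl -fLO "https://dl.k8s.io/release/$VER/bin/linux/amd64/kubelet"
    curl -fL  "https://dl.k8s.io/release/$VER/bin/linux/amd64/kubelet.sha256" -o kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    chmod +x kubelet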
	I1204 23:57:08.563181   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:57:08.571736   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 23:57:08.586849   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:57:08.601801   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:57:08.616859   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:57:08.620378   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:57:08.631234   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:08.746289   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:08.762474   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:08.762951   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:08.763010   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:08.777798   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1204 23:57:08.778308   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:08.778784   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:08.778801   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:08.779085   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:08.779279   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:08.779432   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:57:08.779528   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:57:08.779552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:08.782464   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.782947   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:08.782974   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.783150   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:08.783306   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:08.783439   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:08.783593   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:08.959692   36539 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:08.959734   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I1204 23:57:30.700354   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (21.740590437s)
	I1204 23:57:30.700399   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:57:31.260769   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m02 minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:57:31.388605   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m02 node-role.kubernetes.io/control-plane:NoSchedule-
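The join is the standard two-step kubeadm flow: the existing control plane mints a join command with a fresh token, m02 runs it with the control-plane flags, and the primary then labels the new node and removes its NoSchedule taint so it can also schedule workloads. Because the control-plane certs were pre-staged over SSH earlier, no --certificate-key is needed. Condensed as a sketch from the commands in the log (token and CA-cert hash are the ephemeral values printed above):

    # On the existing control plane: print a join command with a non-expiring token.
    sudo kubeadm token create --print-join-command --ttl=0
    # On the joining node (m02): run it with the extra control-plane flags used above.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02
    # Back on the primary: mark the node and drop the control-plane NoSchedule taint
    # (the trailing '-' removes the taint).
    kubectl label --overwrite nodes ha-168758-m02 minikube.k8s.io/primary=false
    kubectl taint nodes ha-168758-m02 node-role.kubernetes.io/control-plane:NoSchedule-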
	I1204 23:57:31.507247   36539 start.go:319] duration metric: took 22.727809032s to joinCluster
	I1204 23:57:31.507330   36539 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:31.507657   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:31.510077   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:57:31.511382   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:31.807145   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:31.866005   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:57:31.866366   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:57:31.866446   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:57:31.866697   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m02" to be "Ready" ...
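The long run of GETs against /api/v1/nodes/ha-168758-m02 that follows is a readiness poll: roughly every 500ms the node object is fetched and its Ready condition inspected until it turns True or the 6m budget expires. An equivalent check from a shell, as a hedged stand-in for the raw API loop below:

    # Block until the node reports Ready=True (same condition the GET loop checks).
    kubectl wait --for=condition=Ready node/ha-168758-m02 --timeout=6m
    # Or inspect the condition directly, as each poll in the log effectively does:
    kubectl get node ha-168758-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'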
	I1204 23:57:31.866802   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:31.866815   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:31.866826   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:31.866832   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:31.894115   36539 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1204 23:57:32.367519   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.367541   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.367548   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.367552   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.372462   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:32.867267   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.867297   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.867307   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.867312   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.870593   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.367165   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.367188   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.367199   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.367204   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.371115   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.867446   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.867467   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.867490   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.867496   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.895642   36539 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1204 23:57:33.896043   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:34.367204   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.367227   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.367236   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.367241   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.370493   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:34.867119   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.867142   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.867150   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.867153   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.871875   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:35.367013   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.367043   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.367054   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.367059   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.369963   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:35.867691   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.867711   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.867718   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.867722   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.873237   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:57:36.366914   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.366938   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.366946   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.366957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.369646   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:36.370291   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:36.867182   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.867206   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.867216   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.867221   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.870020   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:37.367042   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.367063   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.367071   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.367074   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.370425   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:37.867722   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.867746   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.867757   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.867762   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.870832   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.367880   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.367907   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.367919   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.367925   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.371699   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.374216   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:38.866862   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.866882   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.866889   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.866894   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.870442   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.367314   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.367335   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.367343   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.367347   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.370397   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.867083   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.867112   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.867124   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.867131   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.869764   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.367183   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.367204   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.367212   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.367216   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.370122   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.866955   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.866977   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.866985   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.866989   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.869893   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.870550   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:41.367838   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.367867   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.367877   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.367883   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.371252   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:41.867051   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.867074   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.867081   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.867086   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.869735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.367874   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.367896   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.367904   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.367908   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.370905   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.867902   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.867923   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.867930   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.867936   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.871077   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:42.871630   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:43.367020   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.367041   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.367049   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.367053   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.369800   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:43.867917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.867935   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.867943   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.867948   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.871058   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:44.366921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.366944   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.366952   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.366956   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.369844   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:44.867849   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.867871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.867878   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.867882   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.871084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.367038   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.367064   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.367072   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.367076   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.370136   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.370824   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:45.867201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.867222   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.867230   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.867234   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.870440   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.367088   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.367111   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.367122   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.367128   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.370360   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.867494   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.867515   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.867523   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.867527   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.870477   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:47.367351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.367377   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.367388   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.367395   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.370421   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:47.371066   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:47.867351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.867371   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.867376   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.867380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.870561   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:48.367177   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.367201   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.367208   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.367212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.370109   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:48.867913   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.867940   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.867951   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.867957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.870839   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:49.367731   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.367752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.367763   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.367775   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.371251   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:49.371965   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:49.867061   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.867082   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.867091   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.867097   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.870692   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.367727   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.367748   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.367756   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.367760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.371359   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.372130   36539 node_ready.go:49] node "ha-168758-m02" has status "Ready":"True"
	I1204 23:57:50.372146   36539 node_ready.go:38] duration metric: took 18.505433084s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:50.372154   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
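With the node Ready, the same GET-and-check pattern is repeated for each system-critical pod carrying the labels listed above, each time also fetching the pod's node. A compact shell equivalent (illustrative, not minikube's own code):

    # Wait for each class of system-critical pod in kube-system to become Ready.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done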
	I1204 23:57:50.372222   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:50.372232   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.372239   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.372245   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.376642   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:50.381842   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.381907   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:57:50.381916   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.381922   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.381927   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.384768   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.385355   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.385369   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.385375   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.385379   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.387687   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.388085   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.388100   36539 pod_ready.go:82] duration metric: took 6.236474ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388109   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388153   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:57:50.388161   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.388167   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.388170   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.390871   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.391380   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.391394   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.391400   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.391405   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.394126   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.394638   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.394652   36539 pod_ready.go:82] duration metric: took 6.538378ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394659   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394701   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:57:50.394709   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.394715   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.394720   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.397059   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.397459   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.397471   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.397478   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.397482   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.399735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.400125   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.400139   36539 pod_ready.go:82] duration metric: took 5.474765ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400148   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400188   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:57:50.400196   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.400202   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.400206   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402193   36539 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 23:57:50.402740   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.402752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.402760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402765   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.404874   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.405354   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.405371   36539 pod_ready.go:82] duration metric: took 5.218099ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.405382   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.568781   36539 request.go:632] Waited for 163.320423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568837   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568845   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.568853   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.568860   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.572195   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.768439   36539 request.go:632] Waited for 195.374469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768509   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768529   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.768537   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.768542   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.771909   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.772411   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.772428   36539 pod_ready.go:82] duration metric: took 367.03986ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.772438   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.968760   36539 request.go:632] Waited for 196.264628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968844   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968862   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.968875   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.968883   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.972428   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.168517   36539 request.go:632] Waited for 195.379762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168570   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.168582   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.168586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.171966   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.172839   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.172864   36539 pod_ready.go:82] duration metric: took 400.419668ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.172878   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.368601   36539 request.go:632] Waited for 195.644185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368673   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368681   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.368689   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.368696   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.371786   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.568796   36539 request.go:632] Waited for 196.387928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568847   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568853   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.568859   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.568862   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.572174   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.572929   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.572951   36539 pod_ready.go:82] duration metric: took 400.064616ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.572965   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.767794   36539 request.go:632] Waited for 194.737211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767865   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.767885   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.767892   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.771726   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.967706   36539 request.go:632] Waited for 195.275144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967798   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967806   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.967816   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.967825   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.971641   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.972143   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.972162   36539 pod_ready.go:82] duration metric: took 399.188822ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.972176   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.168279   36539 request.go:632] Waited for 196.037931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168357   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168364   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.168371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.168380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.172095   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.368165   36539 request.go:632] Waited for 195.357162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368231   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368236   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.368244   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.368246   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.371117   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:52.371718   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.371734   36539 pod_ready.go:82] duration metric: took 399.551881ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.371744   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.568001   36539 request.go:632] Waited for 196.176636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568074   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568080   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.568088   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.568092   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.571383   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.768625   36539 request.go:632] Waited for 196.402632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768686   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768691   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.768702   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.768706   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.772096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.772550   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.772567   36539 pod_ready.go:82] duration metric: took 400.817883ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.772577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.968672   36539 request.go:632] Waited for 196.025622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968728   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968736   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.968747   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.968753   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.972192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.168261   36539 request.go:632] Waited for 195.422832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168312   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168321   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.168329   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.168332   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.171764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.172376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.172393   36539 pod_ready.go:82] duration metric: took 399.810959ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.172403   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.368473   36539 request.go:632] Waited for 196.003907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368564   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.368586   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.368598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.371851   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.567754   36539 request.go:632] Waited for 195.293783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567819   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567824   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.567831   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.567836   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.570770   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:53.571309   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.571333   36539 pod_ready.go:82] duration metric: took 398.922957ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.571348   36539 pod_ready.go:39] duration metric: took 3.199181638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:57:53.571367   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:57:53.571414   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:57:53.585872   36539 api_server.go:72] duration metric: took 22.078506409s to wait for apiserver process to appear ...
	I1204 23:57:53.585898   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:57:53.585919   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:57:53.591379   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:57:53.591435   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:57:53.591440   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.591447   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.591452   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.592362   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:57:53.592455   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:57:53.592472   36539 api_server.go:131] duration metric: took 6.568025ms to wait for apiserver health ...
	I1204 23:57:53.592481   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:57:53.767827   36539 request.go:632] Waited for 175.285413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767882   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767888   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.767897   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.767903   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.772373   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:53.776877   36539 system_pods.go:59] 17 kube-system pods found
	I1204 23:57:53.776902   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:53.776908   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:53.776912   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:53.776916   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:53.776920   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:53.776924   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:53.776927   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:53.776930   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:53.776934   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:53.776937   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:53.776941   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:53.776946   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:53.776949   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:53.776954   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:53.776956   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:53.776962   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:53.776965   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:53.776972   36539 system_pods.go:74] duration metric: took 184.481749ms to wait for pod list to return data ...
	I1204 23:57:53.777015   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:57:53.968124   36539 request.go:632] Waited for 191.035378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968194   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968199   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.968206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.968213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.971592   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.971802   36539 default_sa.go:45] found service account: "default"
	I1204 23:57:53.971821   36539 default_sa.go:55] duration metric: took 194.795603ms for default service account to be created ...
	I1204 23:57:53.971829   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:57:54.168141   36539 request.go:632] Waited for 196.235523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168209   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.168224   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.168233   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.172658   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:54.176765   36539 system_pods.go:86] 17 kube-system pods found
	I1204 23:57:54.176792   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:54.176798   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:54.176803   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:54.176806   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:54.176809   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:54.176813   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:54.176816   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:54.176819   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:54.176823   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:54.176835   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:54.176843   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:54.176846   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:54.176850   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:54.176853   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:54.176856   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:54.176859   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:54.176862   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:54.176867   36539 system_pods.go:126] duration metric: took 205.031013ms to wait for k8s-apps to be running ...
	I1204 23:57:54.176876   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:57:54.176915   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:54.192123   36539 system_svc.go:56] duration metric: took 15.235714ms WaitForService to wait for kubelet
	I1204 23:57:54.192153   36539 kubeadm.go:582] duration metric: took 22.684789622s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:57:54.192178   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:57:54.368510   36539 request.go:632] Waited for 176.251249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368569   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368574   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.368581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.368586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.371935   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:54.372940   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372962   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372973   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372977   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372982   36539 node_conditions.go:105] duration metric: took 180.798569ms to run NodePressure ...
	I1204 23:57:54.372992   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:57:54.373014   36539 start.go:255] writing updated cluster config ...
	I1204 23:57:54.375210   36539 out.go:201] 
	I1204 23:57:54.376552   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:54.376634   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.378003   36539 out.go:177] * Starting "ha-168758-m03" control-plane node in "ha-168758" cluster
	I1204 23:57:54.379143   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:57:54.379161   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:57:54.379243   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:57:54.379255   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:57:54.379336   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.379486   36539 start.go:360] acquireMachinesLock for ha-168758-m03: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:57:54.379524   36539 start.go:364] duration metric: took 21.433µs to acquireMachinesLock for "ha-168758-m03"
	I1204 23:57:54.379541   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:54.379640   36539 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 23:57:54.382003   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:57:54.382100   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:54.382136   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:54.396586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I1204 23:57:54.397026   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:54.397468   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:54.397491   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:54.397753   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:54.397933   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:57:54.398074   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:57:54.398206   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:57:54.398237   36539 client.go:168] LocalClient.Create starting
	I1204 23:57:54.398271   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:57:54.398314   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398335   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398396   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:57:54.398425   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398442   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398467   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:57:54.398479   36539 main.go:141] libmachine: (ha-168758-m03) Calling .PreCreateCheck
	I1204 23:57:54.398600   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:57:54.398924   36539 main.go:141] libmachine: Creating machine...
	I1204 23:57:54.398936   36539 main.go:141] libmachine: (ha-168758-m03) Calling .Create
	I1204 23:57:54.399103   36539 main.go:141] libmachine: (ha-168758-m03) Creating KVM machine...
	I1204 23:57:54.400268   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing default KVM network
	I1204 23:57:54.400380   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing private KVM network mk-ha-168758
	I1204 23:57:54.400501   36539 main.go:141] libmachine: (ha-168758-m03) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.400521   36539 main.go:141] libmachine: (ha-168758-m03) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:57:54.400579   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.400496   37304 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.400687   36539 main.go:141] libmachine: (ha-168758-m03) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:57:54.638854   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.638724   37304 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa...
	I1204 23:57:54.688376   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688260   37304 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk...
	I1204 23:57:54.688404   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing magic tar header
	I1204 23:57:54.688416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing SSH key tar header
	I1204 23:57:54.688424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688388   37304 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.688510   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03
	I1204 23:57:54.688554   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 (perms=drwx------)
	I1204 23:57:54.688585   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:57:54.688596   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:57:54.688610   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.688624   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:57:54.688642   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:57:54.688652   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:57:54.688658   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:57:54.688668   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:57:54.688679   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:57:54.688694   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:57:54.688711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home
	I1204 23:57:54.688721   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:54.688733   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Skipping /home - not owner
	I1204 23:57:54.689655   36539 main.go:141] libmachine: (ha-168758-m03) define libvirt domain using xml: 
	I1204 23:57:54.689672   36539 main.go:141] libmachine: (ha-168758-m03) <domain type='kvm'>
	I1204 23:57:54.689678   36539 main.go:141] libmachine: (ha-168758-m03)   <name>ha-168758-m03</name>
	I1204 23:57:54.689682   36539 main.go:141] libmachine: (ha-168758-m03)   <memory unit='MiB'>2200</memory>
	I1204 23:57:54.689687   36539 main.go:141] libmachine: (ha-168758-m03)   <vcpu>2</vcpu>
	I1204 23:57:54.689691   36539 main.go:141] libmachine: (ha-168758-m03)   <features>
	I1204 23:57:54.689715   36539 main.go:141] libmachine: (ha-168758-m03)     <acpi/>
	I1204 23:57:54.689732   36539 main.go:141] libmachine: (ha-168758-m03)     <apic/>
	I1204 23:57:54.689738   36539 main.go:141] libmachine: (ha-168758-m03)     <pae/>
	I1204 23:57:54.689742   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.689747   36539 main.go:141] libmachine: (ha-168758-m03)   </features>
	I1204 23:57:54.689752   36539 main.go:141] libmachine: (ha-168758-m03)   <cpu mode='host-passthrough'>
	I1204 23:57:54.689758   36539 main.go:141] libmachine: (ha-168758-m03)   
	I1204 23:57:54.689763   36539 main.go:141] libmachine: (ha-168758-m03)   </cpu>
	I1204 23:57:54.689771   36539 main.go:141] libmachine: (ha-168758-m03)   <os>
	I1204 23:57:54.689775   36539 main.go:141] libmachine: (ha-168758-m03)     <type>hvm</type>
	I1204 23:57:54.689783   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='cdrom'/>
	I1204 23:57:54.689786   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='hd'/>
	I1204 23:57:54.689799   36539 main.go:141] libmachine: (ha-168758-m03)     <bootmenu enable='no'/>
	I1204 23:57:54.689805   36539 main.go:141] libmachine: (ha-168758-m03)   </os>
	I1204 23:57:54.689812   36539 main.go:141] libmachine: (ha-168758-m03)   <devices>
	I1204 23:57:54.689817   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='cdrom'>
	I1204 23:57:54.689828   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/boot2docker.iso'/>
	I1204 23:57:54.689833   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hdc' bus='scsi'/>
	I1204 23:57:54.689840   36539 main.go:141] libmachine: (ha-168758-m03)       <readonly/>
	I1204 23:57:54.689844   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689852   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='disk'>
	I1204 23:57:54.689857   36539 main.go:141] libmachine: (ha-168758-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:57:54.689867   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk'/>
	I1204 23:57:54.689875   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hda' bus='virtio'/>
	I1204 23:57:54.689882   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689886   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689891   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='mk-ha-168758'/>
	I1204 23:57:54.689898   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689903   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689910   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689917   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='default'/>
	I1204 23:57:54.689924   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689928   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689935   36539 main.go:141] libmachine: (ha-168758-m03)     <serial type='pty'>
	I1204 23:57:54.689957   36539 main.go:141] libmachine: (ha-168758-m03)       <target port='0'/>
	I1204 23:57:54.689973   36539 main.go:141] libmachine: (ha-168758-m03)     </serial>
	I1204 23:57:54.689983   36539 main.go:141] libmachine: (ha-168758-m03)     <console type='pty'>
	I1204 23:57:54.689987   36539 main.go:141] libmachine: (ha-168758-m03)       <target type='serial' port='0'/>
	I1204 23:57:54.689993   36539 main.go:141] libmachine: (ha-168758-m03)     </console>
	I1204 23:57:54.689997   36539 main.go:141] libmachine: (ha-168758-m03)     <rng model='virtio'>
	I1204 23:57:54.690005   36539 main.go:141] libmachine: (ha-168758-m03)       <backend model='random'>/dev/random</backend>
	I1204 23:57:54.690009   36539 main.go:141] libmachine: (ha-168758-m03)     </rng>
	I1204 23:57:54.690013   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690016   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690021   36539 main.go:141] libmachine: (ha-168758-m03)   </devices>
	I1204 23:57:54.690026   36539 main.go:141] libmachine: (ha-168758-m03) </domain>
	I1204 23:57:54.690035   36539 main.go:141] libmachine: (ha-168758-m03) 
	I1204 23:57:54.696711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:45:00:1a in network default
	I1204 23:57:54.697413   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:54.697439   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring networks are active...
	I1204 23:57:54.698233   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network default is active
	I1204 23:57:54.698580   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network mk-ha-168758 is active
	I1204 23:57:54.698931   36539 main.go:141] libmachine: (ha-168758-m03) Getting domain xml...
	I1204 23:57:54.699649   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:55.920334   36539 main.go:141] libmachine: (ha-168758-m03) Waiting to get IP...
	I1204 23:57:55.921079   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:55.921466   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:55.921501   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:55.921456   37304 retry.go:31] will retry after 209.695163ms: waiting for machine to come up
	I1204 23:57:56.132761   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.133191   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.133212   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.133145   37304 retry.go:31] will retry after 335.842051ms: waiting for machine to come up
	I1204 23:57:56.470639   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.471086   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.471135   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.471032   37304 retry.go:31] will retry after 296.078556ms: waiting for machine to come up
	I1204 23:57:56.768275   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.768757   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.768777   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.768709   37304 retry.go:31] will retry after 507.499948ms: waiting for machine to come up
	I1204 23:57:57.277223   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.277662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.277692   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.277613   37304 retry.go:31] will retry after 706.137269ms: waiting for machine to come up
	I1204 23:57:57.985335   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.985740   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.985763   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.985694   37304 retry.go:31] will retry after 901.171563ms: waiting for machine to come up
	I1204 23:57:58.888734   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:58.889195   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:58.889217   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:58.889148   37304 retry.go:31] will retry after 1.040272258s: waiting for machine to come up
	I1204 23:57:59.931201   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:59.931634   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:59.931662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:59.931580   37304 retry.go:31] will retry after 1.47549599s: waiting for machine to come up
	I1204 23:58:01.408702   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:01.409147   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:01.409198   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:01.409117   37304 retry.go:31] will retry after 1.529862151s: waiting for machine to come up
	I1204 23:58:02.941039   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:02.941538   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:02.941562   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:02.941502   37304 retry.go:31] will retry after 1.818025113s: waiting for machine to come up
	I1204 23:58:04.760910   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:04.761392   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:04.761424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:04.761351   37304 retry.go:31] will retry after 1.812557198s: waiting for machine to come up
	I1204 23:58:06.576024   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:06.576523   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:06.576553   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:06.576452   37304 retry.go:31] will retry after 3.417018451s: waiting for machine to come up
	I1204 23:58:09.994797   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:09.995228   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:09.995255   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:09.995175   37304 retry.go:31] will retry after 3.708395185s: waiting for machine to come up
	I1204 23:58:13.707953   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:13.708378   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:13.708407   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:13.708331   37304 retry.go:31] will retry after 5.536922009s: waiting for machine to come up
	I1204 23:58:19.251010   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251548   36539 main.go:141] libmachine: (ha-168758-m03) Found IP for machine: 192.168.39.211
	I1204 23:58:19.251581   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251590   36539 main.go:141] libmachine: (ha-168758-m03) Reserving static IP address...
	I1204 23:58:19.252021   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find host DHCP lease matching {name: "ha-168758-m03", mac: "52:54:00:25:7a:4c", ip: "192.168.39.211"} in network mk-ha-168758
	I1204 23:58:19.326562   36539 main.go:141] libmachine: (ha-168758-m03) Reserved static IP address: 192.168.39.211
	I1204 23:58:19.326591   36539 main.go:141] libmachine: (ha-168758-m03) Waiting for SSH to be available...
	I1204 23:58:19.326600   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Getting to WaitForSSH function...
	I1204 23:58:19.329626   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330027   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.330055   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330234   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH client type: external
	I1204 23:58:19.330259   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa (-rw-------)
	I1204 23:58:19.330287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:58:19.330300   36539 main.go:141] libmachine: (ha-168758-m03) DBG | About to run SSH command:
	I1204 23:58:19.330316   36539 main.go:141] libmachine: (ha-168758-m03) DBG | exit 0
	I1204 23:58:19.458033   36539 main.go:141] libmachine: (ha-168758-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 23:58:19.458296   36539 main.go:141] libmachine: (ha-168758-m03) KVM machine creation complete!
	I1204 23:58:19.458639   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:19.459211   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459418   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459604   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:58:19.459621   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetState
	I1204 23:58:19.460815   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:58:19.460827   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:58:19.460832   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:58:19.460837   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.463322   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463682   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.463709   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463819   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.463998   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464138   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464299   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.464466   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.464650   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.464661   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:58:19.569758   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.569785   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:58:19.569794   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.573011   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.573434   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573531   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.573755   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.573966   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.574140   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.574339   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.574549   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.574565   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:58:19.678634   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:58:19.678713   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:58:19.678731   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:58:19.678746   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679001   36539 buildroot.go:166] provisioning hostname "ha-168758-m03"
	I1204 23:58:19.679025   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679187   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.681791   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682166   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.682203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682354   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.682571   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682718   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682893   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.683080   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.683252   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.683265   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m03 && echo "ha-168758-m03" | sudo tee /etc/hostname
	I1204 23:58:19.805397   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m03
	
	I1204 23:58:19.805429   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.808431   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.808856   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.808878   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.809012   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.809177   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809288   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809401   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.809650   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.809863   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.809887   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:58:19.923097   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.923127   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:58:19.923146   36539 buildroot.go:174] setting up certificates
	I1204 23:58:19.923155   36539 provision.go:84] configureAuth start
	I1204 23:58:19.923166   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.923476   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:19.926158   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926547   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.926572   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926691   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.928951   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929352   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.929383   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929531   36539 provision.go:143] copyHostCerts
	I1204 23:58:19.929556   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929583   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:58:19.929592   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929653   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:58:19.929721   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929747   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:58:19.929754   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929777   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:58:19.929820   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929837   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:58:19.929844   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929863   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:58:19.929908   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m03 san=[127.0.0.1 192.168.39.211 ha-168758-m03 localhost minikube]
	I1204 23:58:20.121053   36539 provision.go:177] copyRemoteCerts
	I1204 23:58:20.121105   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:58:20.121126   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.123934   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124493   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.124528   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124773   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.124964   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.125130   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.125289   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.208033   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:58:20.208112   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:58:20.233800   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:58:20.233885   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:58:20.258548   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:58:20.258616   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:58:20.282884   36539 provision.go:87] duration metric: took 359.716273ms to configureAuth
	I1204 23:58:20.282915   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:58:20.283121   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:20.283188   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.285836   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286252   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.286287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286487   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.286665   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.286810   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.287008   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.287190   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.287357   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.287370   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:58:20.518772   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
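
The SSH command echoed above drops a sysconfig fragment with CRIO_MINIKUBE_OPTIONS and restarts CRI-O so the cluster service CIDR is treated as an insecure registry range. A small Go sketch that composes the same shell command string (the function name and structure are illustrative; only the generated text mirrors the log):

    package main

    import "fmt"

    // crioSysconfigCmd builds the remote command that writes the CRI-O
    // sysconfig drop-in and restarts the service.
    func crioSysconfigCmd(insecureRegistry string) string {
    	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
    	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
    %s
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
    }

    func main() {
    	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
    }
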
	I1204 23:58:20.518795   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:58:20.518802   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetURL
	I1204 23:58:20.520187   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using libvirt version 6000000
	I1204 23:58:20.522653   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523105   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.523133   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523291   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:58:20.523307   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:58:20.523315   36539 client.go:171] duration metric: took 26.125066489s to LocalClient.Create
	I1204 23:58:20.523341   36539 start.go:167] duration metric: took 26.125137032s to libmachine.API.Create "ha-168758"
	I1204 23:58:20.523357   36539 start.go:293] postStartSetup for "ha-168758-m03" (driver="kvm2")
	I1204 23:58:20.523373   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:58:20.523396   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.523640   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:58:20.523662   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.525716   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526081   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.526107   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526258   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.526431   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.526593   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.526718   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.611923   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:58:20.616136   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:58:20.616164   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:58:20.616222   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:58:20.616301   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:58:20.616310   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:58:20.616393   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:58:20.625581   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:20.649393   36539 start.go:296] duration metric: took 126.018678ms for postStartSetup
	I1204 23:58:20.649447   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:20.650084   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.652794   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653173   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.653203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653515   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:58:20.653706   36539 start.go:128] duration metric: took 26.274055056s to createHost
	I1204 23:58:20.653726   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.657371   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.657844   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.657871   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.658096   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.658293   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658453   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658591   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.658709   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.658880   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.658893   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:58:20.762401   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356700.741731450
	
	I1204 23:58:20.762429   36539 fix.go:216] guest clock: 1733356700.741731450
	I1204 23:58:20.762440   36539 fix.go:229] Guest: 2024-12-04 23:58:20.74173145 +0000 UTC Remote: 2024-12-04 23:58:20.653717113 +0000 UTC m=+145.998402889 (delta=88.014337ms)
	I1204 23:58:20.762459   36539 fix.go:200] guest clock delta is within tolerance: 88.014337ms
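
The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host clock, accepting the skew when it is inside a tolerance. A self-contained Go sketch of that comparison, using the timestamp from the log and an assumed one-second threshold:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fractional part to 9 digits before parsing.
    		frac := (parts[1] + "000000000")[:9]
    		nsec, err = strconv.ParseInt(frac, 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1733356700.741731450\n")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	tolerance := time.Second // assumed threshold
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
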
	I1204 23:58:20.762465   36539 start.go:83] releasing machines lock for "ha-168758-m03", held for 26.382931771s
	I1204 23:58:20.762482   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.762743   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.765725   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.766178   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.766206   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.769126   36539 out.go:177] * Found network options:
	I1204 23:58:20.770828   36539 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.70
	W1204 23:58:20.772463   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.772484   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.772496   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773077   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773256   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773351   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:58:20.773389   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	W1204 23:58:20.773530   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.773562   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.773617   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:58:20.773705   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.776558   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776588   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776979   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777009   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777099   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777175   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777373   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777397   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777602   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777604   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777735   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777749   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.777829   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:21.014947   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:58:21.021071   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:58:21.021137   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:58:21.039078   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:58:21.039110   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:58:21.039177   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:58:21.058712   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:58:21.073746   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:58:21.073808   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:58:21.087957   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:58:21.101618   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:58:21.221628   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:58:21.378405   36539 docker.go:233] disabling docker service ...
	I1204 23:58:21.378477   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:58:21.394967   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:58:21.408722   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:58:21.562596   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:58:21.684473   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:58:21.698325   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:58:21.716302   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:58:21.716372   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.726130   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:58:21.726192   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.737846   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.748038   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.758637   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:58:21.769448   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.779664   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.796293   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
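
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two edits done with regular expressions instead of sed (the whole-file rewrite is an assumption; path and values mirror the log):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)

    	// Point CRI-O at the desired pause image.
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

    	// Use cgroupfs as the cgroup manager, matching the kubelet config.
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }
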
	I1204 23:58:21.806603   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:58:21.816322   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:58:21.816373   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:58:21.829466   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:58:21.839041   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:21.954707   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:58:22.043532   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:58:22.043590   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:58:22.048235   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:58:22.048287   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:58:22.051918   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:58:22.091888   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:58:22.091959   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.119246   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.147844   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:58:22.149672   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:58:22.151366   36539 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.70
	I1204 23:58:22.152919   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:22.155788   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:22.156156   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156371   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:58:22.161001   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:22.172781   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:58:22.173021   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:22.173277   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.173312   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.188201   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I1204 23:58:22.188638   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.189068   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.189086   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.189487   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.189677   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:58:22.191201   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:22.191497   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.191540   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.206205   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I1204 23:58:22.206662   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.207121   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.207141   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.207423   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.207583   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:22.207723   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.211
	I1204 23:58:22.207733   36539 certs.go:194] generating shared ca certs ...
	I1204 23:58:22.207746   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.207850   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:58:22.207885   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:58:22.207894   36539 certs.go:256] generating profile certs ...
	I1204 23:58:22.207958   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:58:22.207980   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0
	I1204 23:58:22.207995   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1204 23:58:22.288746   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 ...
	I1204 23:58:22.288772   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0: {Name:mk226e78394d9a1d1481fe02645dfa0bdae47c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.288924   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 ...
	I1204 23:58:22.288937   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0: {Name:mk254430c45779d3847a67c61786cc3c2c6ae80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.289003   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:58:22.289131   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:58:22.289249   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:58:22.289263   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:58:22.289275   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:58:22.289288   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:58:22.289301   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:58:22.289314   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:58:22.289328   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:58:22.289344   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:58:22.306093   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:58:22.306200   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:58:22.306239   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:58:22.306253   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:58:22.306298   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:58:22.306334   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:58:22.306370   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:58:22.306426   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:22.306471   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.306493   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.306511   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.306552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:22.309680   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310101   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:22.310130   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310297   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:22.310547   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:22.310740   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:22.310907   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:22.386340   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:58:22.392285   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:58:22.405695   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:58:22.409772   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:58:22.423743   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:58:22.427677   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:58:22.437721   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:58:22.441623   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:58:22.452543   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:58:22.461102   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:58:22.471271   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:58:22.477234   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:58:22.491216   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:58:22.516244   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:58:22.540200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:58:22.565503   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:58:22.589637   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 23:58:22.614321   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 23:58:22.637487   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:58:22.661061   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:58:22.683528   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:58:22.706445   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:58:22.729085   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:58:22.750946   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:58:22.766659   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:58:22.782041   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:58:22.797608   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:58:22.812913   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:58:22.828464   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:58:22.843593   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:58:22.859636   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:58:22.865420   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:58:22.876063   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880415   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880469   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.885958   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:58:22.896219   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:58:22.906770   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910894   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.916407   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:58:22.926735   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:58:22.937057   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941137   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941190   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.946692   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
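
The openssl/ln sequence above hashes each CA certificate and links it into /etc/ssl/certs under `<subject-hash>.0`, which is how OpenSSL-based clients discover trusted CAs. A Go sketch of one such link, shelling out to the same `openssl x509 -hash` invocation seen in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and symlinks it
    // into certsDir as <hash>.0, mirroring the `ln -fs` commands in the log.
    func linkCACert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Replace any stale link first.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
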
	I1204 23:58:22.959164   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:58:22.963452   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:58:22.963502   36539 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1204 23:58:22.963582   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
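
The kubeadm step above renders the kubelet systemd drop-in for the new node with its Kubernetes version, hostname override, and node IP. A Go sketch that reproduces the unit text with text/template (the template text and field names are assumptions that just mirror the flags visible in the log):

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
    	err := t.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.31.2", "ha-168758-m03", "192.168.39.211"})
    	if err != nil {
    		panic(err)
    	}
    }
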
	I1204 23:58:22.963624   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:58:22.963666   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:58:22.979996   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:58:22.980050   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
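
The manifest above is the kube-vip static pod that minikube writes for control-plane load balancing; the virtual IP (192.168.39.254) and API port are passed through container env vars. A sketch that parses the generated file back and reads the VIP out of the env list, assuming gopkg.in/yaml.v3 as the YAML library and the manifest path used later in the log:

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // pod models just enough of the static pod to reach the container env.
    type pod struct {
    	Spec struct {
    		Containers []struct {
    			Env []struct {
    				Name  string `yaml:"name"`
    				Value string `yaml:"value"`
    			} `yaml:"env"`
    		} `yaml:"containers"`
    	} `yaml:"spec"`
    }

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		panic(err)
    	}
    	var p pod
    	if err := yaml.Unmarshal(data, &p); err != nil {
    		panic(err)
    	}
    	for _, c := range p.Spec.Containers {
    		for _, e := range c.Env {
    			if e.Name == "address" {
    				fmt.Println("kube-vip VIP:", e.Value)
    			}
    		}
    	}
    }
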
	I1204 23:58:22.980122   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.989754   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:58:22.989828   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.999376   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 23:58:22.999414   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 23:58:22.999429   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999442   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:58:22.999460   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:58:22.999481   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:22.999484   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999564   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:23.016751   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:58:23.016792   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:58:23.016813   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.016873   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:58:23.016893   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:58:23.016908   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.044749   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:58:23.044784   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
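
The binaries.go lines above fetch kubelet, kubeadm, and kubectl from dl.k8s.io, each verified against the sibling .sha256 file named in the checksum= query before being copied into /var/lib/minikube/binaries. A stand-alone Go sketch of that verified download for kubelet (error handling compressed; the URLs mirror the log):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads a URL into memory, failing on any non-200 status.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	// The .sha256 file carries the hex digest (optionally followed by a name).
    	want := strings.Fields(strings.TrimSpace(string(sum)))[0]
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch for kubelet")
    	}
    	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
    		panic(err)
    	}
    	fmt.Println("kubelet verified and written")
    }
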
	I1204 23:58:23.845384   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:58:23.856608   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 23:58:23.873561   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:58:23.888925   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:58:23.904283   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:58:23.908166   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
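
The bash one-liner above replaces any stale control-plane.minikube.internal entry in /etc/hosts with the HA virtual IP. The same edit as a Go sketch (read, filter the tab-separated hostname, append the new mapping, rewrite):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // drop the stale entry, like the grep -v in the log
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
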
	I1204 23:58:23.921626   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:24.054957   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:24.071310   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:24.071774   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:24.071826   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:24.087392   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I1204 23:58:24.087908   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:24.088426   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:24.088445   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:24.088872   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:24.089072   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:24.089234   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:24.089406   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:58:24.089426   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:24.092795   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093319   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:24.093346   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093418   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:24.093603   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:24.093787   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:24.093916   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:24.257067   36539 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:24.257133   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1204 23:58:46.569250   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.312080737s)
	I1204 23:58:46.569302   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:58:47.113692   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m03 minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:58:47.254473   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:58:47.374583   36539 start.go:319] duration metric: took 23.285344776s to joinCluster
	I1204 23:58:47.374658   36539 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:47.375005   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:47.375915   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:58:47.376822   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:47.645069   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:47.700465   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:58:47.700774   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:58:47.700882   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:58:47.701229   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:58:47.701326   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:47.701346   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:47.701361   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:47.701372   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:47.704946   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.202242   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.202269   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.202277   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.202281   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.205645   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.701516   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.701536   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.701543   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.701547   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.704919   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.202192   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.202215   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.202225   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.202231   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.205567   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.701894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.701912   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.701920   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.701928   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.705415   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.706113   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:50.201921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.201950   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.201958   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.201963   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.205192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:50.701850   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.701878   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.701890   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.701897   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.705478   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.201426   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.201446   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.201457   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.201463   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.204423   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:51.702262   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.702285   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.702293   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.702297   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.706096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.706649   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:52.201560   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.201578   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.201585   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.201590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.204456   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:52.701709   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.701729   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.701737   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.701742   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.705302   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:53.201587   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.201627   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.201637   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.201642   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.206081   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:58:53.702110   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.702136   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.702146   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.702151   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.707592   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:58:53.708884   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:54.202266   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.202287   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.202299   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.202303   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.205388   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:54.702184   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.702205   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.702212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.702217   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.705854   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.202090   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.202109   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.202117   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.202122   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.205278   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.702366   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.702386   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.702393   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.702398   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.705911   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.202018   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.202039   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.202048   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.202052   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.205205   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.205770   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:56.702046   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.702068   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.702077   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.702079   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.705883   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.201534   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.201553   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.201560   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.201565   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.204718   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.702166   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.702192   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.702207   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.702215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.706100   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.202178   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.202197   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.202207   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.202213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.205266   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.205989   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:58.702432   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.702453   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.702461   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.702465   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.706164   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:59.201677   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.201698   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.201706   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.204712   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:59.702080   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.702106   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.702114   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.702118   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.705833   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.202377   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.202398   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.202405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.202410   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.204828   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:00.701624   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.701644   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.701652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.701655   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.704805   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.705308   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:01.201668   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.201694   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.201704   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.205149   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:01.702232   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.702261   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.702269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.702275   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.706624   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:02.201616   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.201640   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.201652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.201658   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.213640   36539 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1204 23:59:02.701482   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.701500   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.701507   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.701511   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.707401   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:02.707932   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:03.202348   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.202370   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.202382   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.202386   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.206086   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:03.702128   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.702153   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.702167   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.702176   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.705764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.202048   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.202071   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.202079   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.202083   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.205585   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.702350   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.702371   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.702379   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.702383   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.706391   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.201695   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.201717   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.201724   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.201728   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.206270   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:05.207294   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:05.701694   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.701716   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.701727   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.701733   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.705389   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.706154   36539 node_ready.go:49] node "ha-168758-m03" has status "Ready":"True"
	I1204 23:59:05.706172   36539 node_ready.go:38] duration metric: took 18.004918776s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:59:05.706181   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:05.706250   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:05.706260   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.706266   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.706270   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.713324   36539 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 23:59:05.720724   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.720806   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:59:05.720815   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.720829   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.720837   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.724822   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.725428   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.725443   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.725451   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.725455   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.728496   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.729387   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.729410   36539 pod_ready.go:82] duration metric: took 8.658373ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729423   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729491   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:59:05.729504   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.729516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.729523   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.732217   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.732771   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.732784   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.732791   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.732795   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.735517   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.736112   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.736127   36539 pod_ready.go:82] duration metric: took 6.697955ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736135   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736189   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:59:05.736199   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.736206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.736209   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.739373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.740248   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.740263   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.740269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.740273   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.742492   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.742913   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.742932   36539 pod_ready.go:82] duration metric: took 6.791875ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742940   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742986   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:59:05.742994   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.743000   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.743004   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.745506   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.746152   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:05.746167   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.746173   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.746177   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.748564   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.749064   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.749082   36539 pod_ready.go:82] duration metric: took 6.135218ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.749094   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.902481   36539 request.go:632] Waited for 153.318139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902555   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902560   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.902568   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.902576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.906398   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.102389   36539 request.go:632] Waited for 195.399546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102443   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102448   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.102476   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.102487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.105971   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.106400   36539 pod_ready.go:93] pod "etcd-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.106416   36539 pod_ready.go:82] duration metric: took 357.316023ms for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.106432   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.302535   36539 request.go:632] Waited for 196.013383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302597   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302608   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.302620   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.302626   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.305529   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:06.502522   36539 request.go:632] Waited for 196.356899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502576   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.502590   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.502598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.507875   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:06.508456   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.508479   36539 pod_ready.go:82] duration metric: took 402.037453ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.508493   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.702449   36539 request.go:632] Waited for 193.867804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702503   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702509   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.702516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.702521   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.706330   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.902289   36539 request.go:632] Waited for 195.373137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902347   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902355   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.902371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.902380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.905544   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.906052   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.906069   36539 pod_ready.go:82] duration metric: took 397.563799ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.906080   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.102210   36539 request.go:632] Waited for 196.065026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102278   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102285   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.102298   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.102311   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.105396   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.302017   36539 request.go:632] Waited for 195.70193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302103   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302115   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.302147   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.302157   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.307086   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:07.307589   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.307617   36539 pod_ready.go:82] duration metric: took 401.528153ms for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.307631   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.502581   36539 request.go:632] Waited for 194.865138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502634   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502639   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.502646   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.502652   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.506125   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.702202   36539 request.go:632] Waited for 195.372404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702281   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702288   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.702295   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.702302   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.705978   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.706658   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.706678   36539 pod_ready.go:82] duration metric: took 399.03822ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.706694   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.901924   36539 request.go:632] Waited for 195.154365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902019   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902029   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.902036   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.902039   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.905257   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.102494   36539 request.go:632] Waited for 196.34114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102566   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.102574   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.102581   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.106643   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:08.107401   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.107420   36539 pod_ready.go:82] duration metric: took 400.717315ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.107434   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.302582   36539 request.go:632] Waited for 195.013676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302656   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302666   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.302679   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.302688   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.308848   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:08.501837   36539 request.go:632] Waited for 192.289915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501891   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501918   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.501932   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.501962   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.504980   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:08.505549   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.505567   36539 pod_ready.go:82] duration metric: took 398.125286ms for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.505577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.702722   36539 request.go:632] Waited for 197.087803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702788   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702797   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.702805   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.702809   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.706146   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.902503   36539 request.go:632] Waited for 195.802416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902574   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.902588   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.902593   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.905815   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.906362   36539 pod_ready.go:93] pod "kube-proxy-9hxpd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.906385   36539 pod_ready.go:82] duration metric: took 400.800234ms for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.906398   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.101673   36539 request.go:632] Waited for 195.212044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101755   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101767   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.101780   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.101787   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.105084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.302204   36539 request.go:632] Waited for 196.371294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302277   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302283   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.302290   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.302295   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.305317   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.306001   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.306020   36539 pod_ready.go:82] duration metric: took 399.614816ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.306032   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.502424   36539 request.go:632] Waited for 196.313347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502475   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502483   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.502494   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.502501   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.505899   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.702597   36539 request.go:632] Waited for 196.046362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702661   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702667   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.702675   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.702679   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.706005   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.706501   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.706518   36539 pod_ready.go:82] duration metric: took 400.479582ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.706528   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.902149   36539 request.go:632] Waited for 195.548343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902203   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902208   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.902215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.902218   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.905375   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.102485   36539 request.go:632] Waited for 196.29891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102549   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102556   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.102567   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.102576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.105620   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.106279   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.106297   36539 pod_ready.go:82] duration metric: took 399.763414ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.106307   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.302480   36539 request.go:632] Waited for 196.111079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302570   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.302581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.302590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.306007   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.501852   36539 request.go:632] Waited for 195.277553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501924   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.501957   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.501978   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.505309   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.505892   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.505913   36539 pod_ready.go:82] duration metric: took 399.598555ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.505927   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.702021   36539 request.go:632] Waited for 195.988022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702085   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702093   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.702104   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.702111   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.705373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.902383   36539 request.go:632] Waited for 196.36776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902455   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902463   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.902473   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.902487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.905908   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.906376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.906394   36539 pod_ready.go:82] duration metric: took 400.458551ms for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.906408   36539 pod_ready.go:39] duration metric: took 5.200215763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:10.906427   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:59:10.906489   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:59:10.925438   36539 api_server.go:72] duration metric: took 23.550742485s to wait for apiserver process to appear ...
	I1204 23:59:10.925468   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:59:10.925490   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:59:10.929840   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:59:10.929894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:59:10.929901   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.929909   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.929916   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.930671   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:59:10.930740   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:59:10.930758   36539 api_server.go:131] duration metric: took 5.282883ms to wait for apiserver health ...
	I1204 23:59:10.930771   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:59:11.102180   36539 request.go:632] Waited for 171.329772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102245   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102253   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.102264   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.102275   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.108643   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.116294   36539 system_pods.go:59] 24 kube-system pods found
	I1204 23:59:11.116322   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.116329   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.116335   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.116341   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.116346   36539 system_pods.go:61] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.116350   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.116356   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.116361   36539 system_pods.go:61] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.116367   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.116375   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.116381   36539 system_pods.go:61] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.116388   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.116399   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.116406   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.116415   36539 system_pods.go:61] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.116421   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.116429   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.116435   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.116444   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.116450   36539 system_pods.go:61] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.116458   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.116464   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.116472   36539 system_pods.go:61] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.116480   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.116490   36539 system_pods.go:74] duration metric: took 185.709136ms to wait for pod list to return data ...
	I1204 23:59:11.116504   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:59:11.302342   36539 request.go:632] Waited for 185.765456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302393   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302399   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.302405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.302409   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.305795   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.305915   36539 default_sa.go:45] found service account: "default"
	I1204 23:59:11.305932   36539 default_sa.go:55] duration metric: took 189.416579ms for default service account to be created ...
	I1204 23:59:11.305962   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:59:11.502392   36539 request.go:632] Waited for 196.360375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502441   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502446   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.502454   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.502457   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.509349   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.515542   36539 system_pods.go:86] 24 kube-system pods found
	I1204 23:59:11.515567   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.515572   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.515576   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.515580   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.515584   36539 system_pods.go:89] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.515587   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.515592   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.515595   36539 system_pods.go:89] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.515599   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.515603   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.515607   36539 system_pods.go:89] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.515611   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.515615   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.515620   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.515624   36539 system_pods.go:89] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.515627   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.515630   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.515633   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.515636   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.515639   36539 system_pods.go:89] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.515642   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.515645   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.515648   36539 system_pods.go:89] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.515654   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.515660   36539 system_pods.go:126] duration metric: took 209.689107ms to wait for k8s-apps to be running ...
	I1204 23:59:11.515669   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:59:11.515708   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:59:11.532158   36539 system_svc.go:56] duration metric: took 16.479991ms WaitForService to wait for kubelet
	I1204 23:59:11.532192   36539 kubeadm.go:582] duration metric: took 24.157502148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:59:11.532215   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:59:11.702618   36539 request.go:632] Waited for 170.317658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702682   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702690   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.702701   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.702708   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.706495   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.707279   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707300   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707313   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707318   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707323   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707329   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707335   36539 node_conditions.go:105] duration metric: took 175.114412ms to run NodePressure ...
	I1204 23:59:11.707353   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:59:11.707385   36539 start.go:255] writing updated cluster config ...
	I1204 23:59:11.707662   36539 ssh_runner.go:195] Run: rm -f paused
	I1204 23:59:11.758613   36539 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:59:11.760643   36539 out.go:177] * Done! kubectl is now configured to use "ha-168758" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.817904052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356972817877654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd005593-f364-4c82-8733-176a0ae08976 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.818474768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af48363d-b298-490f-9dc5-c945b0058d96 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.818545240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af48363d-b298-490f-9dc5-c945b0058d96 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.818899794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af48363d-b298-490f-9dc5-c945b0058d96 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.856915510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca4ae7d5-b550-4bae-bd30-969517fe45b8 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.857069645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca4ae7d5-b550-4bae-bd30-969517fe45b8 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.858293786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b1488d0-24c6-4971-b6a3-b35ddff6cdee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.858724641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356972858704889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b1488d0-24c6-4971-b6a3-b35ddff6cdee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.859376045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8023043a-0653-47ed-806d-ca48b634ee17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.859430796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8023043a-0653-47ed-806d-ca48b634ee17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.859697657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8023043a-0653-47ed-806d-ca48b634ee17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.896456351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3c7fcf7-1218-4cc7-bfe1-e8ed007bc61c name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.896542113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3c7fcf7-1218-4cc7-bfe1-e8ed007bc61c name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.898015405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cffff151-8abc-4c40-981b-c206af6664c1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.900055714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356972900019338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cffff151-8abc-4c40-981b-c206af6664c1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.903026988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f0b4119-472b-43bf-b2b7-4134bdb285a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.903090514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f0b4119-472b-43bf-b2b7-4134bdb285a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.903381722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f0b4119-472b-43bf-b2b7-4134bdb285a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.941270173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6708b99e-020b-4f48-a959-6a020824f796 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.941356040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6708b99e-020b-4f48-a959-6a020824f796 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.942564302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be504931-c32b-4570-b204-91526455972c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.943050895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356972943027088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be504931-c32b-4570-b204-91526455972c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.943592114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a243763-e74b-4dff-a505-1651cc63611e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.943643739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a243763-e74b-4dff-a505-1651cc63611e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:52 ha-168758 crio[658]: time="2024-12-05 00:02:52.943863830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:1733356590967646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a243763-e74b-4dff-a505-1651cc63611e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7ff0ab07801       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a10916ad1046       busybox-7dff88458-qnw8r
	acd6b28e94859       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b695edf511c5e       coredns-7c65d6cfc9-zgfvm
	775689612e70e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b16e3868f6d03       coredns-7c65d6cfc9-rdbhr
	ee81cf4f88af4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   bb748d0b54fa2       storage-provisioner
	4110b2797a4e4       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   f35f03db32634       kindnet-9l7jr
	54fe1d8daa049       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   29cc806b9c066       kube-proxy-vjhjm
	f9cdaa348b12d       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   399ac5ad91348       kube-vip-ha-168758
	f83a840ab3b70       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9e97775967d5d       kube-scheduler-ha-168758
	1729d6a102a14       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7fa6e474a7667       kube-controller-manager-ha-168758
	9ba9268cc4c55       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c6a8751c9bc08       etcd-ha-168758
	413e537381433       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   05e1ede7541eb       kube-apiserver-ha-168758
	
	
	==> coredns [775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977] <==
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242119s
	[INFO] 10.244.1.2:36162 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009811s
	[INFO] 10.244.1.2:39283 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118789s
	[INFO] 10.244.2.2:54111 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004220931s
	[INFO] 10.244.2.2:32931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237164s
	[INFO] 10.244.2.2:41295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142312s
	[INFO] 10.244.2.2:53943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008473674s
	[INFO] 10.244.2.2:46770 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202477s
	[INFO] 10.244.2.2:55371 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129803s
	[INFO] 10.244.0.4:54472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128336s
	[INFO] 10.244.0.4:56711 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094635s
	[INFO] 10.244.1.2:59375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133764s
	[INFO] 10.244.1.2:35780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111638s
	[INFO] 10.244.1.2:34975 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070467s
	[INFO] 10.244.2.2:45753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154754s
	[INFO] 10.244.2.2:33451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215593s
	[INFO] 10.244.2.2:40356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085623s
	[INFO] 10.244.0.4:46816 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012269s
	[INFO] 10.244.0.4:47330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069513s
	[INFO] 10.244.0.4:57640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114217s
	[INFO] 10.244.1.2:33801 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096362s
	[INFO] 10.244.2.2:37705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000200759s
	[INFO] 10.244.2.2:42211 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190547s
	[INFO] 10.244.0.4:33141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134356s
	[INFO] 10.244.0.4:43968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006677s
	
	
	==> coredns [acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c] <==
	[INFO] 10.244.2.2:49728 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316528s
	[INFO] 10.244.0.4:43307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166964s
	[INFO] 10.244.0.4:56572 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062823s
	[INFO] 10.244.0.4:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000465964s
	[INFO] 10.244.0.4:41018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122656s
	[INFO] 10.244.0.4:58493 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154857s
	[INFO] 10.244.0.4:52626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075295s
	[INFO] 10.244.1.2:49458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001666843s
	[INFO] 10.244.1.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102856s
	[INFO] 10.244.1.2:33580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001685081s
	[INFO] 10.244.1.2:42118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171665s
	[INFO] 10.244.1.2:59523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108873s
	[INFO] 10.244.2.2:49031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292612s
	[INFO] 10.244.0.4:56781 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089871s
	[INFO] 10.244.1.2:52724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218204s
	[INFO] 10.244.1.2:48399 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104187s
	[INFO] 10.244.1.2:60140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087293s
	[INFO] 10.244.2.2:47493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233961s
	[INFO] 10.244.2.2:38060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000321274s
	[INFO] 10.244.0.4:49076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086072s
	[INFO] 10.244.0.4:47481 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000284893s
	[INFO] 10.244.1.2:55588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252847s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257823s
	[INFO] 10.244.1.2:54163 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122038s
	[INFO] 10.244.1.2:39518 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016158s
	
	
	==> describe nodes <==
	Name:               ha-168758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-168758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cbaecf7e4184dafa60b26978754f7be
	  System UUID:                4cbaecf7-e418-4daf-a60b-26978754f7be
	  Boot ID:                    4bc94051-ae8a-40e7-9636-681faf9bf1fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qnw8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-rdbhr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-zgfvm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-168758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-9l7jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-168758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-168758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-vjhjm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-168758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-168758                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-168758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-168758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-168758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-168758 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  RegisteredNode           4m1s   node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	
	
	Name:               ha-168758-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:57:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:00:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-168758-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f10b54702f3c44dfba5cac04964ccd96
	  System UUID:                f10b5470-2f3c-44df-ba5c-ac04964ccd96
	  Boot ID:                    33cbfb56-c30f-45e3-b72b-7b95c35b9817
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vzls9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-168758-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-f4s96                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-168758-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-168758-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-s57v4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-168758-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-168758-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m25s (x2 over 5m25s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x2 over 5m25s)  kubelet          Node ha-168758-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x2 over 5m25s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeReady                5m3s                   kubelet          Node ha-168758-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-168758-m02 status is now: NodeNotReady
	
	
	Name:               ha-168758-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-168758-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45e57e3f54964fb3a3aeed03051209b3
	  System UUID:                45e57e3f-5496-4fb3-a3ae-ed03051209b3
	  Boot ID:                    8e17ecfe-4d6f-4380-b2ec-d58e3528bee5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-brzwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-168758-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-qzjm2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-168758-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-168758-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-9hxpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-168758-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-168758-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-168758-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	
	
	Name:               ha-168758-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_59_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:59:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:00:17 +0000   Thu, 05 Dec 2024 00:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-168758-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 52df6477abaa40eb8464d395e2a11a95
	  System UUID:                52df6477-abaa-40eb-8464-d395e2a11a95
	  Boot ID:                    af84ed2d-18aa-4219-8f2e-d7ea7a1e76fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-964c5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-6r2gb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)  kubelet          Node ha-168758-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-168758-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 23:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.965232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.552645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.572491] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.058309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061191] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.184233] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.102774] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.251131] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.779101] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.425619] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.056248] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.972412] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.078582] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128288] kauditd_printk_skb: 33 callbacks suppressed
	[ +14.702984] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 4 23:57] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8] <==
	{"level":"warn","ts":"2024-12-05T00:02:52.992568Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8efbd523922c356","error":"Get \"https://192.168.39.70:2380/version\": dial tcp 192.168.39.70:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-05T00:02:53.204695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.209836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.211235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.213915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.226591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.232632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.239668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.243813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.246782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.247385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.252543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.258529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.264109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.267774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.272136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.279802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.284824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.291397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.294604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.297643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.318505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.324628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.329858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:53.347647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:02:53 up 6 min,  0 users,  load average: 0.18, 0.34, 0.19
	Linux ha-168758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302] <==
	I1205 00:02:22.983493       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:32.975709       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:32.975834       1 main.go:301] handling current node
	I1205 00:02:32.975867       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:32.975885       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:32.976196       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:32.976232       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:32.976329       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:32.976349       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:42.976549       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:42.977185       1 main.go:301] handling current node
	I1205 00:02:42.977331       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:42.977368       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:42.977798       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:42.977846       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:42.978121       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:42.978436       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:52.983049       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:52.983086       1 main.go:301] handling current node
	I1205 00:02:52.983099       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:52.983104       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:52.983347       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:52.983354       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:52.983479       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:52.983484       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619] <==
	W1204 23:56:31.867915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I1204 23:56:31.869149       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:56:31.873256       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:56:32.153052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:56:33.976231       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:56:33.989474       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 23:56:34.001309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:56:37.651703       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:56:37.764778       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1204 23:59:17.616400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33356: use of closed network connection
	E1204 23:59:17.796618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33360: use of closed network connection
	E1204 23:59:17.975175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33372: use of closed network connection
	E1204 23:59:18.159453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33388: use of closed network connection
	E1204 23:59:18.340465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33408: use of closed network connection
	E1204 23:59:18.525899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33424: use of closed network connection
	E1204 23:59:18.699204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33448: use of closed network connection
	E1204 23:59:18.871260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33456: use of closed network connection
	E1204 23:59:19.044471       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33468: use of closed network connection
	E1204 23:59:19.324768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33494: use of closed network connection
	E1204 23:59:19.503485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33512: use of closed network connection
	E1204 23:59:19.681162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33532: use of closed network connection
	E1204 23:59:19.864655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33548: use of closed network connection
	E1204 23:59:20.038197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33568: use of closed network connection
	E1204 23:59:20.209773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33584: use of closed network connection
	W1205 00:00:41.898434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.211 192.168.39.246]
	
	
	==> kube-controller-manager [1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3] <==
	E1204 23:59:46.495910       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2xb9f failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2xb9f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1204 23:59:46.913622       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-168758-m04\" does not exist"
	I1204 23:59:46.947226       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-168758-m04" podCIDRs=["10.244.3.0/24"]
	I1204 23:59:46.947330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.947390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.951979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.179648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195618       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-168758-m04"
	I1204 23:59:47.217355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.297136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.621862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:57.226529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.702516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.703393       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:00:06.718143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:07.112438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:17.922402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:01:07.143195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:01:07.144088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.170589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.248294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.276459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.988319ms"
	I1205 00:01:07.277110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.518µs"
	I1205 00:01:12.432512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	
	
	==> kube-proxy [54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:56:38.767631       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:56:38.783503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E1204 23:56:38.783650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:56:38.826822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:56:38.826910       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:56:38.827013       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:56:38.829514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:56:38.830207       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:56:38.830274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:56:38.832929       1 config.go:199] "Starting service config controller"
	I1204 23:56:38.833423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:56:38.833660       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:56:38.833695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:56:38.836133       1 config.go:328] "Starting node config controller"
	I1204 23:56:38.836155       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:56:38.934569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:56:38.934627       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:56:38.936209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579] <==
	W1204 23:56:30.307297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:30.307308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.211435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:56:31.211491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.247731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:56:31.247775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.414104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.414215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.426986       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:31.427111       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:56:31.430415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.430490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.446966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:31.447114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.462015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:31.462105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.482639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:31.482672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1204 23:56:33.294482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:58:43.853823       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzjm2" node="ha-168758-m03"
	E1204 23:58:43.856189       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" pod="kube-system/kindnet-qzjm2"
	E1204 23:59:46.998972       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6r2gb" node="ha-168758-m04"
	E1204 23:59:46.999075       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" pod="kube-system/kube-proxy-6r2gb"
	
	
	==> kubelet <==
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:01:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017719    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017769    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.019770    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.020092    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022125    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022148    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026278    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026320    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029261    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029299    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032005    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032327    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:33 ha-168758 kubelet[1298]: E1205 00:02:33.944974    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034558    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037075    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037120    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr: (4.134512435s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (1.314850775s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m03_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-168758 node start m02 -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:55:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:55:54.691284   36539 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:54.691414   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691424   36539 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:54.691432   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691628   36539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:54.692240   36539 out.go:352] Setting JSON to false
	I1204 23:55:54.693153   36539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5901,"bootTime":1733350654,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:54.693247   36539 start.go:139] virtualization: kvm guest
	I1204 23:55:54.695552   36539 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:54.696955   36539 notify.go:220] Checking for updates...
	I1204 23:55:54.696986   36539 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:54.698203   36539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:54.699583   36539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:54.700898   36539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.702193   36539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:54.703640   36539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:54.705053   36539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:54.739286   36539 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 23:55:54.740386   36539 start.go:297] selected driver: kvm2
	I1204 23:55:54.740398   36539 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:55:54.740409   36539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:54.741134   36539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.741212   36539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:55:54.755884   36539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:55:54.755947   36539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:55:54.756215   36539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:55:54.756246   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:55:54.756288   36539 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 23:55:54.756296   36539 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:55:54.756346   36539 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:54.756434   36539 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.758105   36539 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1204 23:55:54.759313   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:55:54.759357   36539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:55:54.759363   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:55:54.759483   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:55:54.759495   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:55:54.759812   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:55:54.759833   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json: {Name:mk6e7aaeeb71d483ab659741dd4a36c7ed0cf735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:55:54.759956   36539 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:55:54.759986   36539 start.go:364] duration metric: took 15.99µs to acquireMachinesLock for "ha-168758"
	I1204 23:55:54.759999   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:55:54.760065   36539 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 23:55:54.761682   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:55:54.761785   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:54.761819   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:54.775974   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I1204 23:55:54.776416   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:54.776960   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:55:54.776978   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:54.777327   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:54.777500   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:55:54.777669   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:55:54.777814   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:55:54.777843   36539 client.go:168] LocalClient.Create starting
	I1204 23:55:54.777875   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:55:54.777915   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.777928   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778006   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:55:54.778033   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.778049   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778066   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:55:54.778079   36539 main.go:141] libmachine: (ha-168758) Calling .PreCreateCheck
	I1204 23:55:54.778456   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:55:54.778813   36539 main.go:141] libmachine: Creating machine...
	I1204 23:55:54.778825   36539 main.go:141] libmachine: (ha-168758) Calling .Create
	I1204 23:55:54.778947   36539 main.go:141] libmachine: (ha-168758) Creating KVM machine...
	I1204 23:55:54.780063   36539 main.go:141] libmachine: (ha-168758) DBG | found existing default KVM network
	I1204 23:55:54.780869   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.780727   36562 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1204 23:55:54.780949   36539 main.go:141] libmachine: (ha-168758) DBG | created network xml: 
	I1204 23:55:54.780981   36539 main.go:141] libmachine: (ha-168758) DBG | <network>
	I1204 23:55:54.780994   36539 main.go:141] libmachine: (ha-168758) DBG |   <name>mk-ha-168758</name>
	I1204 23:55:54.781002   36539 main.go:141] libmachine: (ha-168758) DBG |   <dns enable='no'/>
	I1204 23:55:54.781007   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781012   36539 main.go:141] libmachine: (ha-168758) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 23:55:54.781017   36539 main.go:141] libmachine: (ha-168758) DBG |     <dhcp>
	I1204 23:55:54.781022   36539 main.go:141] libmachine: (ha-168758) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 23:55:54.781030   36539 main.go:141] libmachine: (ha-168758) DBG |     </dhcp>
	I1204 23:55:54.781067   36539 main.go:141] libmachine: (ha-168758) DBG |   </ip>
	I1204 23:55:54.781085   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781094   36539 main.go:141] libmachine: (ha-168758) DBG | </network>
	I1204 23:55:54.781107   36539 main.go:141] libmachine: (ha-168758) DBG | 
	I1204 23:55:54.786047   36539 main.go:141] libmachine: (ha-168758) DBG | trying to create private KVM network mk-ha-168758 192.168.39.0/24...
	I1204 23:55:54.850461   36539 main.go:141] libmachine: (ha-168758) DBG | private KVM network mk-ha-168758 192.168.39.0/24 created
	I1204 23:55:54.850492   36539 main.go:141] libmachine: (ha-168758) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:54.850520   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.850437   36562 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.850537   36539 main.go:141] libmachine: (ha-168758) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:55:54.850557   36539 main.go:141] libmachine: (ha-168758) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:55:55.110330   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.110147   36562 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa...
	I1204 23:55:55.305236   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305108   36562 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk...
	I1204 23:55:55.305287   36539 main.go:141] libmachine: (ha-168758) DBG | Writing magic tar header
	I1204 23:55:55.305304   36539 main.go:141] libmachine: (ha-168758) DBG | Writing SSH key tar header
	I1204 23:55:55.305328   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305248   36562 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:55.305350   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758
	I1204 23:55:55.305432   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 (perms=drwx------)
	I1204 23:55:55.305462   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:55:55.305478   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:55:55.305493   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:55:55.305502   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:55:55.305511   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:55:55.305522   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:55:55.305536   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:55.305552   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:55.305567   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:55:55.305572   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:55:55.305581   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:55:55.305588   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home
	I1204 23:55:55.305594   36539 main.go:141] libmachine: (ha-168758) DBG | Skipping /home - not owner
	I1204 23:55:55.306542   36539 main.go:141] libmachine: (ha-168758) define libvirt domain using xml: 
	I1204 23:55:55.306560   36539 main.go:141] libmachine: (ha-168758) <domain type='kvm'>
	I1204 23:55:55.306569   36539 main.go:141] libmachine: (ha-168758)   <name>ha-168758</name>
	I1204 23:55:55.306577   36539 main.go:141] libmachine: (ha-168758)   <memory unit='MiB'>2200</memory>
	I1204 23:55:55.306585   36539 main.go:141] libmachine: (ha-168758)   <vcpu>2</vcpu>
	I1204 23:55:55.306593   36539 main.go:141] libmachine: (ha-168758)   <features>
	I1204 23:55:55.306598   36539 main.go:141] libmachine: (ha-168758)     <acpi/>
	I1204 23:55:55.306605   36539 main.go:141] libmachine: (ha-168758)     <apic/>
	I1204 23:55:55.306610   36539 main.go:141] libmachine: (ha-168758)     <pae/>
	I1204 23:55:55.306624   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.306651   36539 main.go:141] libmachine: (ha-168758)   </features>
	I1204 23:55:55.306675   36539 main.go:141] libmachine: (ha-168758)   <cpu mode='host-passthrough'>
	I1204 23:55:55.306682   36539 main.go:141] libmachine: (ha-168758)   
	I1204 23:55:55.306690   36539 main.go:141] libmachine: (ha-168758)   </cpu>
	I1204 23:55:55.306700   36539 main.go:141] libmachine: (ha-168758)   <os>
	I1204 23:55:55.306707   36539 main.go:141] libmachine: (ha-168758)     <type>hvm</type>
	I1204 23:55:55.306714   36539 main.go:141] libmachine: (ha-168758)     <boot dev='cdrom'/>
	I1204 23:55:55.306724   36539 main.go:141] libmachine: (ha-168758)     <boot dev='hd'/>
	I1204 23:55:55.306733   36539 main.go:141] libmachine: (ha-168758)     <bootmenu enable='no'/>
	I1204 23:55:55.306742   36539 main.go:141] libmachine: (ha-168758)   </os>
	I1204 23:55:55.306751   36539 main.go:141] libmachine: (ha-168758)   <devices>
	I1204 23:55:55.306763   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='cdrom'>
	I1204 23:55:55.306773   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/boot2docker.iso'/>
	I1204 23:55:55.306798   36539 main.go:141] libmachine: (ha-168758)       <target dev='hdc' bus='scsi'/>
	I1204 23:55:55.306817   36539 main.go:141] libmachine: (ha-168758)       <readonly/>
	I1204 23:55:55.306827   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306836   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='disk'>
	I1204 23:55:55.306848   36539 main.go:141] libmachine: (ha-168758)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:55:55.306863   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk'/>
	I1204 23:55:55.306877   36539 main.go:141] libmachine: (ha-168758)       <target dev='hda' bus='virtio'/>
	I1204 23:55:55.306888   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306895   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306907   36539 main.go:141] libmachine: (ha-168758)       <source network='mk-ha-168758'/>
	I1204 23:55:55.306917   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306926   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306935   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306944   36539 main.go:141] libmachine: (ha-168758)       <source network='default'/>
	I1204 23:55:55.306958   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306969   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306980   36539 main.go:141] libmachine: (ha-168758)     <serial type='pty'>
	I1204 23:55:55.306986   36539 main.go:141] libmachine: (ha-168758)       <target port='0'/>
	I1204 23:55:55.306993   36539 main.go:141] libmachine: (ha-168758)     </serial>
	I1204 23:55:55.307001   36539 main.go:141] libmachine: (ha-168758)     <console type='pty'>
	I1204 23:55:55.307011   36539 main.go:141] libmachine: (ha-168758)       <target type='serial' port='0'/>
	I1204 23:55:55.307022   36539 main.go:141] libmachine: (ha-168758)     </console>
	I1204 23:55:55.307036   36539 main.go:141] libmachine: (ha-168758)     <rng model='virtio'>
	I1204 23:55:55.307049   36539 main.go:141] libmachine: (ha-168758)       <backend model='random'>/dev/random</backend>
	I1204 23:55:55.307058   36539 main.go:141] libmachine: (ha-168758)     </rng>
	I1204 23:55:55.307065   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307074   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307081   36539 main.go:141] libmachine: (ha-168758)   </devices>
	I1204 23:55:55.307091   36539 main.go:141] libmachine: (ha-168758) </domain>
	I1204 23:55:55.307100   36539 main.go:141] libmachine: (ha-168758) 
	I1204 23:55:55.311286   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:cd:a8:a7 in network default
	I1204 23:55:55.311801   36539 main.go:141] libmachine: (ha-168758) Ensuring networks are active...
	I1204 23:55:55.311821   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:55.312494   36539 main.go:141] libmachine: (ha-168758) Ensuring network default is active
	I1204 23:55:55.312741   36539 main.go:141] libmachine: (ha-168758) Ensuring network mk-ha-168758 is active
	I1204 23:55:55.313181   36539 main.go:141] libmachine: (ha-168758) Getting domain xml...
	I1204 23:55:55.313756   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:56.499396   36539 main.go:141] libmachine: (ha-168758) Waiting to get IP...
	I1204 23:55:56.500076   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.500499   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.500544   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.500496   36562 retry.go:31] will retry after 269.735879ms: waiting for machine to come up
	I1204 23:55:56.772123   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.772694   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.772717   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.772654   36562 retry.go:31] will retry after 276.504012ms: waiting for machine to come up
	I1204 23:55:57.051105   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.051610   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.051639   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.051569   36562 retry.go:31] will retry after 401.653592ms: waiting for machine to come up
	I1204 23:55:57.455271   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.455618   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.455645   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.455573   36562 retry.go:31] will retry after 574.501264ms: waiting for machine to come up
	I1204 23:55:58.031181   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.031549   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.031571   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.031508   36562 retry.go:31] will retry after 481.845874ms: waiting for machine to come up
	I1204 23:55:58.515212   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.515533   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.515555   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.515482   36562 retry.go:31] will retry after 665.524951ms: waiting for machine to come up
	I1204 23:55:59.182393   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:59.182797   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:59.182828   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:59.182761   36562 retry.go:31] will retry after 904.131177ms: waiting for machine to come up
	I1204 23:56:00.088207   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:00.088652   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:00.088679   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:00.088622   36562 retry.go:31] will retry after 1.17453888s: waiting for machine to come up
	I1204 23:56:01.265172   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:01.265600   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:01.265628   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:01.265549   36562 retry.go:31] will retry after 1.695149003s: waiting for machine to come up
	I1204 23:56:02.963009   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:02.963469   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:02.963496   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:02.963425   36562 retry.go:31] will retry after 1.502018093s: waiting for machine to come up
	I1204 23:56:04.467087   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:04.467517   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:04.467540   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:04.467461   36562 retry.go:31] will retry after 2.001332834s: waiting for machine to come up
	I1204 23:56:06.470836   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:06.471294   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:06.471318   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:06.471260   36562 retry.go:31] will retry after 3.491074143s: waiting for machine to come up
	I1204 23:56:09.963576   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:09.964086   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:09.964117   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:09.964017   36562 retry.go:31] will retry after 3.362007605s: waiting for machine to come up
	I1204 23:56:13.329557   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:13.329894   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:13.329919   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:13.329871   36562 retry.go:31] will retry after 3.508639528s: waiting for machine to come up
	I1204 23:56:16.841192   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.841632   36539 main.go:141] libmachine: (ha-168758) Found IP for machine: 192.168.39.246
	I1204 23:56:16.841646   36539 main.go:141] libmachine: (ha-168758) Reserving static IP address...
	I1204 23:56:16.841659   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has current primary IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.842065   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find host DHCP lease matching {name: "ha-168758", mac: "52:54:00:e5:a0:85", ip: "192.168.39.246"} in network mk-ha-168758
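	(Editor's note: the lines above show the driver polling the libvirt network for the VM's DHCP lease with growing, jittered delays until an IP appears. The following is a minimal, self-contained Go sketch of that retry pattern, not minikube's actual retry.go; lookupIP is a hypothetical stand-in for querying the DHCP leases.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a placeholder for "ask libvirt for the domain's lease".
	// Here it simply fails a few times before succeeding, to drive the loop.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoLease
		}
		return "192.168.39.246", nil
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			// Grow the delay and add jitter, roughly like the
			// "will retry after ..." intervals in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Println("Found IP for machine:", ip)
	}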
	I1204 23:56:16.911192   36539 main.go:141] libmachine: (ha-168758) DBG | Getting to WaitForSSH function...
	I1204 23:56:16.911218   36539 main.go:141] libmachine: (ha-168758) Reserved static IP address: 192.168.39.246
	I1204 23:56:16.911227   36539 main.go:141] libmachine: (ha-168758) Waiting for SSH to be available...
	I1204 23:56:16.913359   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913692   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:16.913721   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913806   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH client type: external
	I1204 23:56:16.913831   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa (-rw-------)
	I1204 23:56:16.913874   36539 main.go:141] libmachine: (ha-168758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:56:16.913895   36539 main.go:141] libmachine: (ha-168758) DBG | About to run SSH command:
	I1204 23:56:16.913909   36539 main.go:141] libmachine: (ha-168758) DBG | exit 0
	I1204 23:56:17.042024   36539 main.go:141] libmachine: (ha-168758) DBG | SSH cmd err, output: <nil>: 
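	(Editor's note: the "Using SSH client type: external" probe above shells out to /usr/bin/ssh and runs `exit 0` to decide whether the machine is reachable. A small Go sketch of that probe, reusing the flags, key path and address from this run, assuming a local ssh binary is available:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no", "-o", "ControlPath=none",
			"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"docker@192.168.39.246",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa",
			"-p", "22",
			"exit 0",
		}
		// A nil error from the probe means SSH is up and the command exited 0.
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", out, err)
	}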
	I1204 23:56:17.042323   36539 main.go:141] libmachine: (ha-168758) KVM machine creation complete!
	I1204 23:56:17.042711   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:17.043239   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043439   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043673   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:56:17.043690   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:17.045108   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:56:17.045123   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:56:17.045129   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:56:17.045137   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.047586   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.047979   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.048020   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.048171   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.048340   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048499   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048610   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.048758   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.048940   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.048952   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:56:17.161079   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.161103   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:56:17.161111   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.163996   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164353   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.164381   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164502   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.164678   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164819   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164953   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.165084   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.165251   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.165266   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:56:17.278333   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:56:17.278413   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:56:17.278424   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:56:17.278431   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278638   36539 buildroot.go:166] provisioning hostname "ha-168758"
	I1204 23:56:17.278665   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278846   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.281326   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281670   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.281694   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281868   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.282066   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282192   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282295   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.282427   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.282619   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.282636   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1204 23:56:17.407414   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1204 23:56:17.407442   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.410140   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410515   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.410537   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410696   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.410903   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411049   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411179   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.411328   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.411496   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.411510   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:56:17.529969   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.530000   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:56:17.530033   36539 buildroot.go:174] setting up certificates
	I1204 23:56:17.530051   36539 provision.go:84] configureAuth start
	I1204 23:56:17.530060   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.530320   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:17.533080   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533538   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.533566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533710   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.535938   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536275   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.536304   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536461   36539 provision.go:143] copyHostCerts
	I1204 23:56:17.536485   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536522   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:56:17.536530   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536603   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:56:17.536688   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536713   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:56:17.536717   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536740   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:56:17.536823   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536843   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:56:17.536847   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536869   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:56:17.536930   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
	I1204 23:56:17.711262   36539 provision.go:177] copyRemoteCerts
	I1204 23:56:17.711322   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:56:17.711342   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.714054   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714497   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.714526   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714715   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.714918   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.715121   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.715311   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:17.799635   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:56:17.799713   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:56:17.821716   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:56:17.821774   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 23:56:17.843183   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:56:17.843274   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:56:17.864534   36539 provision.go:87] duration metric: took 334.473115ms to configureAuth
	I1204 23:56:17.864557   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:56:17.864710   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:17.864770   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.867187   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867580   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.867600   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867756   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.867942   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868106   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868250   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.868399   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.868552   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.868568   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:56:18.094883   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:56:18.094906   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:56:18.094924   36539 main.go:141] libmachine: (ha-168758) Calling .GetURL
	I1204 23:56:18.096038   36539 main.go:141] libmachine: (ha-168758) DBG | Using libvirt version 6000000
	I1204 23:56:18.097970   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098244   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.098281   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098435   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:56:18.098456   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:56:18.098466   36539 client.go:171] duration metric: took 23.320611985s to LocalClient.Create
	I1204 23:56:18.098496   36539 start.go:167] duration metric: took 23.320681504s to libmachine.API.Create "ha-168758"
	I1204 23:56:18.098512   36539 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1204 23:56:18.098543   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:56:18.098564   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.098810   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:56:18.098838   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.100934   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101269   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.101300   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101384   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.101557   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.101728   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.101868   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.187794   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:56:18.191682   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:56:18.191702   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:56:18.191762   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:56:18.191831   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:56:18.191844   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:56:18.191928   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:56:18.200599   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:18.221903   36539 start.go:296] duration metric: took 123.376466ms for postStartSetup
	I1204 23:56:18.221953   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:18.222479   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.225322   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225603   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.225629   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225822   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:18.226039   36539 start.go:128] duration metric: took 23.465963847s to createHost
	I1204 23:56:18.226063   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.227948   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228242   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.228279   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228403   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.228611   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228757   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228867   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.229008   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:18.229167   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:18.229181   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:56:18.342221   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356578.318371600
	
	I1204 23:56:18.342254   36539 fix.go:216] guest clock: 1733356578.318371600
	I1204 23:56:18.342262   36539 fix.go:229] Guest: 2024-12-04 23:56:18.3183716 +0000 UTC Remote: 2024-12-04 23:56:18.226051425 +0000 UTC m=+23.570737200 (delta=92.320175ms)
	I1204 23:56:18.342279   36539 fix.go:200] guest clock delta is within tolerance: 92.320175ms
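	(Editor's note: the clock-skew check above compares the guest's `date +%s.%N` output against the host clock and continues only if the delta is small. A Go sketch of that comparison, using the exact values from this run; the 1s tolerance is an assumption for illustration.)

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "seconds.nanoseconds" from `date +%s.%N`
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1733356578.318371600")
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 12, 4, 23, 56, 18, 226051425, time.UTC)

		delta := guest.Sub(host) // 92.320175ms for this run
		const tolerance = 1 * time.Second // assumed tolerance
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would sync time\n", delta)
		}
	}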
	I1204 23:56:18.342284   36539 start.go:83] releasing machines lock for "ha-168758", held for 23.582292191s
	I1204 23:56:18.342302   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.342572   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.345220   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345537   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.345566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345727   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346174   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346341   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346431   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:56:18.346466   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.346494   36539 ssh_runner.go:195] Run: cat /version.json
	I1204 23:56:18.346512   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.348977   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349188   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349347   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349382   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349542   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349563   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349567   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349682   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349723   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349810   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.349858   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349927   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.349985   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.350095   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.458516   36539 ssh_runner.go:195] Run: systemctl --version
	I1204 23:56:18.464121   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:56:18.616560   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:56:18.621973   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:56:18.622057   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:56:18.637008   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:56:18.637037   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:56:18.637115   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:56:18.652474   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:56:18.665368   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:56:18.665440   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:56:18.678285   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:56:18.691433   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:56:18.809520   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:56:18.976690   36539 docker.go:233] disabling docker service ...
	I1204 23:56:18.976760   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:56:18.990927   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:56:19.002936   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:56:19.109515   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:56:19.215051   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:56:19.228643   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:56:19.245611   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:56:19.245668   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.255147   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:56:19.255206   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.264826   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.274125   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.283553   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:56:19.293134   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.302638   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.318465   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
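	(Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they set the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup after it. The Go sketch below applies the same substitutions to an in-memory string purely for illustration; the starting config fragment is made up.)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// pause_image = "registry.k8s.io/pause:3.10"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		// cgroup_manager = "cgroupfs", with conmon_cgroup = "pod" right after it
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}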
	I1204 23:56:19.327811   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:56:19.336218   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:56:19.336258   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:56:19.347830   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:56:19.356252   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:19.465000   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:56:19.556969   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:56:19.557036   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:56:19.561493   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:56:19.561547   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:56:19.564952   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:56:19.603625   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:56:19.603715   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.633677   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.660728   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:56:19.661971   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:19.664503   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.664856   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:19.664875   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.665065   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:56:19.668903   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:19.681192   36539 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:56:19.681288   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:19.681341   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:19.712021   36539 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 23:56:19.712076   36539 ssh_runner.go:195] Run: which lz4
	I1204 23:56:19.715771   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 23:56:19.715868   36539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 23:56:19.719604   36539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 23:56:19.719630   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 23:56:20.924241   36539 crio.go:462] duration metric: took 1.208402192s to copy over tarball
	I1204 23:56:20.924314   36539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 23:56:22.898491   36539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.974144098s)
	I1204 23:56:22.898524   36539 crio.go:469] duration metric: took 1.974249374s to extract the tarball
	I1204 23:56:22.898533   36539 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 23:56:22.934474   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:22.976510   36539 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:56:22.976531   36539 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:56:22.976539   36539 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1204 23:56:22.976632   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:56:22.976690   36539 ssh_runner.go:195] Run: crio config
	I1204 23:56:23.020705   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:23.020725   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:23.020733   36539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:56:23.020757   36539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:56:23.020887   36539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:56:23.020918   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:56:23.020968   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:56:23.036384   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:56:23.036487   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:56:23.036538   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:56:23.045918   36539 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:56:23.045983   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 23:56:23.054649   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 23:56:23.069725   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:56:23.085152   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 23:56:23.100420   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 23:56:23.115477   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:56:23.118939   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:23.130202   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:23.249727   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:56:23.265905   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1204 23:56:23.265923   36539 certs.go:194] generating shared ca certs ...
	I1204 23:56:23.265936   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.266102   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:56:23.266137   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:56:23.266147   36539 certs.go:256] generating profile certs ...
	I1204 23:56:23.266191   36539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:56:23.266210   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt with IP's: []
	I1204 23:56:23.454121   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt ...
	I1204 23:56:23.454147   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt: {Name:mk99d8a201f862390e96b084b532473bf8d8aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454316   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key ...
	I1204 23:56:23.454328   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key: {Name:mkfdc708535eba27b71f019b2247ebd8ebc65dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454403   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a
	I1204 23:56:23.454418   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I1204 23:56:23.571650   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a ...
	I1204 23:56:23.571677   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a: {Name:mk2304c3cee7d491c51de92409bc18bd5dd4ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571829   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a ...
	I1204 23:56:23.571841   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a: {Name:mkd75785818ce3fa6f4d0786b8ae41ba1de41627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571908   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:56:23.572007   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:56:23.572063   36539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:56:23.572077   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt with IP's: []
	I1204 23:56:23.673268   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt ...
	I1204 23:56:23.673293   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt: {Name:mkd46a8ffbcb119ca858529996fc89416032f07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673434   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key ...
	I1204 23:56:23.673444   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key: {Name:mk7fc42a81c367eaf7af5377c2050677c25e0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
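The certs.go/crypto.go lines above generate per-profile certificates (client, apiserver, proxy-client) signed by the shared minikubeCA, with the apiserver cert carrying the service IP, localhost, node IP, and HA VIP as SANs. A compressed sketch of the same idea with Go's crypto/x509 follows: a throwaway CA plus one leaf certificate with those IP SANs. Error handling and key persistence are trimmed, and the file layout is not minikube's.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Errors are ignored for brevity; a real implementation checks each one.

	// CA key and self-signed CA certificate (the "minikubeCA" role).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf (apiserver) key and certificate signed by the CA, with the
	// IP SANs listed in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// Write the leaf cert as PEM (private-key handling omitted).
	f, _ := os.Create("apiserver.crt")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```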
	I1204 23:56:23.673507   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:56:23.673523   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:56:23.673534   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:56:23.673547   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:56:23.673561   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:56:23.673574   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:56:23.673586   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:56:23.673598   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:56:23.673642   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:56:23.673674   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:56:23.673683   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:56:23.673707   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:56:23.673729   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:56:23.673751   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:56:23.673787   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:23.673812   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.673825   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.673837   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:23.674359   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:56:23.698874   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:56:23.720936   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:56:23.742507   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:56:23.763840   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 23:56:23.787615   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:56:23.809303   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:56:23.830746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:56:23.852712   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:56:23.874053   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:56:23.895302   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:56:23.916675   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:56:23.931879   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:56:23.937416   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:56:23.947727   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951899   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.957401   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:56:23.967356   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:56:23.977327   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981332   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981387   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.986550   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:56:23.996547   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:56:24.006567   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010520   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010574   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.015659   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
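Each CA bundle staged under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 above) so TLS clients on the node can find it. The following is a local-only sketch of that hash-and-link step with os/exec; minikube runs the equivalent shell commands over SSH on the node.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash" prints the subject hash OpenSSL uses to look
	// certificates up in /etc/ssl/certs (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Link the bundle under its hash name if the link is not there yet.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
}
```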
	I1204 23:56:24.028404   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:56:24.032403   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:56:24.032453   36539 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:56:24.032536   36539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:56:24.032593   36539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:56:24.081213   36539 cri.go:89] found id: ""
	I1204 23:56:24.081287   36539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:56:24.095063   36539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:56:24.104106   36539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:56:24.113190   36539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:56:24.113206   36539 kubeadm.go:157] found existing configuration files:
	
	I1204 23:56:24.113242   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:56:24.121657   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:56:24.121703   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:56:24.130638   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:56:24.139019   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:56:24.139075   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:56:24.147703   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.156160   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:56:24.156205   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.165072   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:56:24.173473   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:56:24.173525   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:56:24.182301   36539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 23:56:24.376697   36539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:56:34.579933   36539 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:56:34.580018   36539 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:56:34.580122   36539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:56:34.580249   36539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:56:34.580403   36539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:56:34.580494   36539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:56:34.582178   36539 out.go:235]   - Generating certificates and keys ...
	I1204 23:56:34.582281   36539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:56:34.582355   36539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:56:34.582417   36539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:56:34.582467   36539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:56:34.582518   36539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:56:34.582564   36539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:56:34.582636   36539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:56:34.582772   36539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582829   36539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:56:34.582931   36539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582990   36539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:56:34.583041   36539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:56:34.583090   36539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:56:34.583139   36539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:56:34.583188   36539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:56:34.583239   36539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:56:34.583290   36539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:56:34.583344   36539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:56:34.583394   36539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:56:34.583463   36539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:56:34.583529   36539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:56:34.585082   36539 out.go:235]   - Booting up control plane ...
	I1204 23:56:34.585162   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:56:34.585225   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:56:34.585285   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:56:34.585417   36539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:56:34.585527   36539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:56:34.585583   36539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:56:34.585716   36539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:56:34.585831   36539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:56:34.585888   36539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.206497ms
	I1204 23:56:34.585961   36539 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:56:34.586024   36539 kubeadm.go:310] [api-check] The API server is healthy after 5.989146948s
	I1204 23:56:34.586115   36539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:56:34.586292   36539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:56:34.586380   36539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:56:34.586600   36539 kubeadm.go:310] [mark-control-plane] Marking the node ha-168758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:56:34.586682   36539 kubeadm.go:310] [bootstrap-token] Using token: xmluo7.j7yx8f6chysik4ve
	I1204 23:56:34.588072   36539 out.go:235]   - Configuring RBAC rules ...
	I1204 23:56:34.588170   36539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:56:34.588259   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:56:34.588422   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:56:34.588567   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:56:34.588682   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:56:34.588762   36539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:56:34.588863   36539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:56:34.588922   36539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:56:34.588962   36539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:56:34.588968   36539 kubeadm.go:310] 
	I1204 23:56:34.589020   36539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:56:34.589026   36539 kubeadm.go:310] 
	I1204 23:56:34.589098   36539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:56:34.589104   36539 kubeadm.go:310] 
	I1204 23:56:34.589125   36539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:56:34.589179   36539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:56:34.589225   36539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:56:34.589231   36539 kubeadm.go:310] 
	I1204 23:56:34.589295   36539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:56:34.589308   36539 kubeadm.go:310] 
	I1204 23:56:34.589352   36539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:56:34.589359   36539 kubeadm.go:310] 
	I1204 23:56:34.589401   36539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:56:34.589475   36539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:56:34.589570   36539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:56:34.589580   36539 kubeadm.go:310] 
	I1204 23:56:34.589656   36539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:56:34.589727   36539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:56:34.589735   36539 kubeadm.go:310] 
	I1204 23:56:34.589810   36539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.589952   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c \
	I1204 23:56:34.589986   36539 kubeadm.go:310] 	--control-plane 
	I1204 23:56:34.589993   36539 kubeadm.go:310] 
	I1204 23:56:34.590071   36539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:56:34.590078   36539 kubeadm.go:310] 
	I1204 23:56:34.590151   36539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.590255   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c 
	I1204 23:56:34.590265   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:34.590270   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:34.591672   36539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:56:34.592995   36539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:56:34.598320   36539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:56:34.598340   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:56:34.616335   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 23:56:34.967246   36539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:56:34.967343   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:34.967389   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758 minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=true
	I1204 23:56:35.009341   36539 ops.go:34] apiserver oom_adj: -16
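ops.go records the API server's oom_adj (-16 here) by reading /proc/$(pgrep kube-apiserver)/oom_adj over SSH; a strongly negative value keeps the kernel's OOM killer away from the apiserver. A local equivalent, sketched with os/exec, looks like this:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints the PID(s) of the matching process; take the first one.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]

	// oom_adj < 0 makes the OOM killer less likely to pick this process;
	// the log above expects the apiserver to report -16.
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
}
```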
	I1204 23:56:35.191577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:35.691603   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.192181   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.691747   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.192577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.691663   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.837937   36539 kubeadm.go:1113] duration metric: took 2.870656691s to wait for elevateKubeSystemPrivileges
	I1204 23:56:37.837999   36539 kubeadm.go:394] duration metric: took 13.80554404s to StartCluster
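The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to appear before binding cluster-admin to kube-system (the elevateKubeSystemPrivileges step). The same wait, expressed directly against the API with client-go, is sketched below; minikube instead shells out to kubectl on the node, and the kubeconfig path here is only illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms until the "default" ServiceAccount exists or two
	// minutes pass, mirroring the retry cadence visible in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is ready")
}
```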
	I1204 23:56:37.838021   36539 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.838113   36539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.838945   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.839186   36539 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:37.839215   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:56:37.839197   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:56:37.839216   36539 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 23:56:37.839321   36539 addons.go:69] Setting storage-provisioner=true in profile "ha-168758"
	I1204 23:56:37.839348   36539 addons.go:234] Setting addon storage-provisioner=true in "ha-168758"
	I1204 23:56:37.839351   36539 addons.go:69] Setting default-storageclass=true in profile "ha-168758"
	I1204 23:56:37.839375   36539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-168758"
	I1204 23:56:37.839380   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:37.839384   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.839759   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839800   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.839824   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839863   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.854550   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1204 23:56:37.854839   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1204 23:56:37.854964   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855259   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855493   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855517   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855736   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855762   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855853   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.855980   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.856068   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.856674   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.856721   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.858200   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.858527   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
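The kapi.go dump above is the rest.Config minikube builds for the new cluster: the HA VIP as Host plus the profile's client certificate, key, and CA file for mutual TLS. Built by hand it looks roughly like the following sketch; the paths come from the log, while the node listing at the end is only an illustrative use of the client.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/20045-14126/.minikube"
	cfg := &rest.Config{
		// The HA VIP fronting all control-plane nodes, not a node IP.
		Host: "https://192.168.39.254:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/ha-168758/client.crt",
			KeyFile:  profile + "/profiles/ha-168758/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster has %d node(s)\n", len(nodes.Items))
}
```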
	I1204 23:56:37.859020   36539 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 23:56:37.859271   36539 addons.go:234] Setting addon default-storageclass=true in "ha-168758"
	I1204 23:56:37.859308   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.859664   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.859703   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.872166   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1204 23:56:37.872824   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.873354   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.873383   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.873694   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.873898   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.873955   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I1204 23:56:37.874272   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.874700   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.874726   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.875097   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.875640   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.875644   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.875720   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.877494   36539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:56:37.878808   36539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:37.878822   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:56:37.878836   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.882048   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882510   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.882542   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882617   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.882791   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.882930   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.883065   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.890703   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1204 23:56:37.891111   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.891600   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.891624   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.891905   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.892126   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.893659   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.893845   36539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:37.893861   36539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:56:37.893876   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.896603   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897013   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.897035   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897191   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.897338   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.897461   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.897562   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.981170   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:56:38.100674   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:38.124529   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:38.478625   36539 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 23:56:38.478692   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.478710   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.478984   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479004   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479014   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.479022   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.479240   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479255   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479326   36539 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 23:56:38.479347   36539 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 23:56:38.479442   36539 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 23:56:38.479452   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.479462   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.479469   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.494926   36539 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1204 23:56:38.495499   36539 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 23:56:38.495513   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.495520   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.495529   36539 round_trippers.go:473]     Content-Type: application/json
	I1204 23:56:38.495532   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.504036   36539 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 23:56:38.504185   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.504196   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.504490   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.504510   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.504543   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.728533   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728559   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.728849   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.728876   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.728885   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728893   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.729129   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.729202   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.729219   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.730946   36539 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 23:56:38.732198   36539 addons.go:510] duration metric: took 892.982269ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 23:56:38.732240   36539 start.go:246] waiting for cluster config update ...
	I1204 23:56:38.732263   36539 start.go:255] writing updated cluster config ...
	I1204 23:56:38.733842   36539 out.go:201] 
	I1204 23:56:38.735235   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:38.735344   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.736922   36539 out.go:177] * Starting "ha-168758-m02" control-plane node in "ha-168758" cluster
	I1204 23:56:38.738175   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:38.738201   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:56:38.738327   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:56:38.738348   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:56:38.738442   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.738692   36539 start.go:360] acquireMachinesLock for ha-168758-m02: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:56:38.738753   36539 start.go:364] duration metric: took 32.863µs to acquireMachinesLock for "ha-168758-m02"
	I1204 23:56:38.738778   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:38.738869   36539 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 23:56:38.740536   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:56:38.740652   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:38.740698   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:38.755586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I1204 23:56:38.756099   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:38.756624   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:38.756663   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:38.757003   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:38.757205   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:56:38.757343   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:56:38.757484   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:56:38.757510   36539 client.go:168] LocalClient.Create starting
	I1204 23:56:38.757536   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:56:38.757571   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757584   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757629   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:56:38.757661   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757671   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757684   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:56:38.757692   36539 main.go:141] libmachine: (ha-168758-m02) Calling .PreCreateCheck
	I1204 23:56:38.757862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:56:38.758275   36539 main.go:141] libmachine: Creating machine...
	I1204 23:56:38.758289   36539 main.go:141] libmachine: (ha-168758-m02) Calling .Create
	I1204 23:56:38.758443   36539 main.go:141] libmachine: (ha-168758-m02) Creating KVM machine...
	I1204 23:56:38.759771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing default KVM network
	I1204 23:56:38.759905   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing private KVM network mk-ha-168758
	I1204 23:56:38.760010   36539 main.go:141] libmachine: (ha-168758-m02) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:38.760033   36539 main.go:141] libmachine: (ha-168758-m02) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:56:38.760144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:38.760017   36906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:38.760207   36539 main.go:141] libmachine: (ha-168758-m02) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:56:39.002966   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.002856   36906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa...
	I1204 23:56:39.068744   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068644   36906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk...
	I1204 23:56:39.068771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing magic tar header
	I1204 23:56:39.068781   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing SSH key tar header
	I1204 23:56:39.068788   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068754   36906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:39.068880   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02
	I1204 23:56:39.068913   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 (perms=drwx------)
	I1204 23:56:39.068925   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:56:39.068946   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:39.068960   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:56:39.068973   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:56:39.068987   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:56:39.068997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:56:39.069012   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:56:39.069021   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home
	I1204 23:56:39.069037   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Skipping /home - not owner
	I1204 23:56:39.069073   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:56:39.069102   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:56:39.069123   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
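Note: the "Checking permissions" / "Setting executable bit" lines above are the driver walking up the ancestors of the machine directory and making sure each one it owns is traversable (execute bit set), stopping at /home because it is not the owner. A minimal, hypothetical Go sketch of that walk-up-and-chmod idea follows; ensureTraversable and its arguments are made-up names, not the kvm2 driver's code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureTraversable walks from dir up to stopAt and adds the owner execute
// bit where it is missing, so another process can traverse down to the
// machine directory. Directories we cannot chmod are reported and skipped.
func ensureTraversable(dir, stopAt string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if info.Mode().Perm()&0o100 == 0 {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				fmt.Printf("skipping %s: %v\n", dir, err)
			}
		}
		if dir == stopAt || dir == string(filepath.Separator) {
			return nil
		}
		dir = filepath.Dir(dir)
	}
}

func main() {
	if err := ensureTraversable("/tmp", "/"); err != nil {
		fmt.Println(err)
	}
}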
	I1204 23:56:39.069137   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:39.070045   36539 main.go:141] libmachine: (ha-168758-m02) define libvirt domain using xml: 
	I1204 23:56:39.070076   36539 main.go:141] libmachine: (ha-168758-m02) <domain type='kvm'>
	I1204 23:56:39.070100   36539 main.go:141] libmachine: (ha-168758-m02)   <name>ha-168758-m02</name>
	I1204 23:56:39.070118   36539 main.go:141] libmachine: (ha-168758-m02)   <memory unit='MiB'>2200</memory>
	I1204 23:56:39.070139   36539 main.go:141] libmachine: (ha-168758-m02)   <vcpu>2</vcpu>
	I1204 23:56:39.070157   36539 main.go:141] libmachine: (ha-168758-m02)   <features>
	I1204 23:56:39.070172   36539 main.go:141] libmachine: (ha-168758-m02)     <acpi/>
	I1204 23:56:39.070188   36539 main.go:141] libmachine: (ha-168758-m02)     <apic/>
	I1204 23:56:39.070199   36539 main.go:141] libmachine: (ha-168758-m02)     <pae/>
	I1204 23:56:39.070210   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070233   36539 main.go:141] libmachine: (ha-168758-m02)   </features>
	I1204 23:56:39.070244   36539 main.go:141] libmachine: (ha-168758-m02)   <cpu mode='host-passthrough'>
	I1204 23:56:39.070251   36539 main.go:141] libmachine: (ha-168758-m02)   
	I1204 23:56:39.070266   36539 main.go:141] libmachine: (ha-168758-m02)   </cpu>
	I1204 23:56:39.070275   36539 main.go:141] libmachine: (ha-168758-m02)   <os>
	I1204 23:56:39.070282   36539 main.go:141] libmachine: (ha-168758-m02)     <type>hvm</type>
	I1204 23:56:39.070293   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='cdrom'/>
	I1204 23:56:39.070303   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='hd'/>
	I1204 23:56:39.070312   36539 main.go:141] libmachine: (ha-168758-m02)     <bootmenu enable='no'/>
	I1204 23:56:39.070322   36539 main.go:141] libmachine: (ha-168758-m02)   </os>
	I1204 23:56:39.070334   36539 main.go:141] libmachine: (ha-168758-m02)   <devices>
	I1204 23:56:39.070343   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='cdrom'>
	I1204 23:56:39.070361   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/boot2docker.iso'/>
	I1204 23:56:39.070369   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hdc' bus='scsi'/>
	I1204 23:56:39.070376   36539 main.go:141] libmachine: (ha-168758-m02)       <readonly/>
	I1204 23:56:39.070387   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070396   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='disk'>
	I1204 23:56:39.070401   36539 main.go:141] libmachine: (ha-168758-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:56:39.070412   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk'/>
	I1204 23:56:39.070427   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hda' bus='virtio'/>
	I1204 23:56:39.070435   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070439   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070447   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='mk-ha-168758'/>
	I1204 23:56:39.070452   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070459   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070463   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070468   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='default'/>
	I1204 23:56:39.070475   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070479   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070483   36539 main.go:141] libmachine: (ha-168758-m02)     <serial type='pty'>
	I1204 23:56:39.070500   36539 main.go:141] libmachine: (ha-168758-m02)       <target port='0'/>
	I1204 23:56:39.070515   36539 main.go:141] libmachine: (ha-168758-m02)     </serial>
	I1204 23:56:39.070526   36539 main.go:141] libmachine: (ha-168758-m02)     <console type='pty'>
	I1204 23:56:39.070533   36539 main.go:141] libmachine: (ha-168758-m02)       <target type='serial' port='0'/>
	I1204 23:56:39.070539   36539 main.go:141] libmachine: (ha-168758-m02)     </console>
	I1204 23:56:39.070547   36539 main.go:141] libmachine: (ha-168758-m02)     <rng model='virtio'>
	I1204 23:56:39.070561   36539 main.go:141] libmachine: (ha-168758-m02)       <backend model='random'>/dev/random</backend>
	I1204 23:56:39.070568   36539 main.go:141] libmachine: (ha-168758-m02)     </rng>
	I1204 23:56:39.070579   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070586   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070604   36539 main.go:141] libmachine: (ha-168758-m02)   </devices>
	I1204 23:56:39.070621   36539 main.go:141] libmachine: (ha-168758-m02) </domain>
	I1204 23:56:39.070635   36539 main.go:141] libmachine: (ha-168758-m02) 
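Note: the XML dumped above is the libvirt domain definition for ha-168758-m02 (2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, the rawdisk as a virtio disk, and two virtio NICs on mk-ha-168758 and default). As an illustration only, a self-contained Go sketch of rendering a similar definition with text/template is shown below; domainConfig, domainTmpl and the example paths are invented for this sketch and are not the kvm2 driver's actual types or template.

package main

import (
	"os"
	"text/template"
)

// domainConfig is a hypothetical parameter struct for the sketch.
type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Networks  []string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
{{- range .Networks}}
    <interface type='network'><source network='{{.}}'/><model type='virtio'/></interface>
{{- end}}
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:      "ha-168758-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-168758-m02.rawdisk",
		Networks:  []string{"mk-ha-168758", "default"},
	}
	// Render the domain XML to stdout; a driver would instead hand the
	// string to libvirt to define the domain.
	t := template.Must(template.New("domain").Parse(domainTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}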
	I1204 23:56:39.077113   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:a2:74:ea in network default
	I1204 23:56:39.077678   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring networks are active...
	I1204 23:56:39.077701   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:39.078445   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network default is active
	I1204 23:56:39.078865   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network mk-ha-168758 is active
	I1204 23:56:39.079237   36539 main.go:141] libmachine: (ha-168758-m02) Getting domain xml...
	I1204 23:56:39.079930   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:40.265017   36539 main.go:141] libmachine: (ha-168758-m02) Waiting to get IP...
	I1204 23:56:40.265754   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.266145   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.266182   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.266132   36906 retry.go:31] will retry after 272.630461ms: waiting for machine to come up
	I1204 23:56:40.540436   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.540778   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.540807   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.540731   36906 retry.go:31] will retry after 280.679773ms: waiting for machine to come up
	I1204 23:56:40.823193   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.823606   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.823628   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.823550   36906 retry.go:31] will retry after 428.494355ms: waiting for machine to come up
	I1204 23:56:41.253121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.253634   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.253657   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.253584   36906 retry.go:31] will retry after 443.923324ms: waiting for machine to come up
	I1204 23:56:41.698819   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.699183   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.699213   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.699129   36906 retry.go:31] will retry after 522.039893ms: waiting for machine to come up
	I1204 23:56:42.222513   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:42.223045   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:42.223075   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:42.222991   36906 retry.go:31] will retry after 778.146913ms: waiting for machine to come up
	I1204 23:56:43.002368   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.002725   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.002748   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.002697   36906 retry.go:31] will retry after 985.520994ms: waiting for machine to come up
	I1204 23:56:43.990537   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.991046   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.991072   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.991013   36906 retry.go:31] will retry after 1.065554597s: waiting for machine to come up
	I1204 23:56:45.057549   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:45.057997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:45.058038   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:45.057954   36906 retry.go:31] will retry after 1.75203222s: waiting for machine to come up
	I1204 23:56:46.812766   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:46.813189   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:46.813218   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:46.813128   36906 retry.go:31] will retry after 1.936174375s: waiting for machine to come up
	I1204 23:56:48.752161   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:48.752595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:48.752615   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:48.752561   36906 retry.go:31] will retry after 2.006049391s: waiting for machine to come up
	I1204 23:56:50.760948   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:50.761423   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:50.761448   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:50.761364   36906 retry.go:31] will retry after 3.635507158s: waiting for machine to come up
	I1204 23:56:54.398466   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:54.398888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:54.398919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:54.398858   36906 retry.go:31] will retry after 3.280789577s: waiting for machine to come up
	I1204 23:56:57.683343   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:57.683888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:57.683924   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:57.683859   36906 retry.go:31] will retry after 5.640187241s: waiting for machine to come up
	I1204 23:57:03.329060   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329506   36539 main.go:141] libmachine: (ha-168758-m02) Found IP for machine: 192.168.39.70
	I1204 23:57:03.329530   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
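Note: the "will retry after ..." block above is the driver polling the DHCP leases of network mk-ha-168758 for the new domain's MAC address with progressively longer, jittered delays until a lease appears (roughly 24 s here, ending with 192.168.39.70). A self-contained Go sketch of that poll-with-growing-backoff pattern follows; lookupIP is a purely hypothetical stand-in for the real lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for a MAC address; here it simply fails a few times.
var attempts int

func lookupIP(mac string) (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("no lease yet")
	}
	return "192.168.39.70", nil
}

// waitForIP retries lookupIP with a growing, jittered delay until it
// succeeds or the overall timeout elapses.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:e2:30:30", time.Minute)
	fmt.Println(ip, err)
}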
	I1204 23:57:03.329536   36539 main.go:141] libmachine: (ha-168758-m02) Reserving static IP address...
	I1204 23:57:03.329911   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find host DHCP lease matching {name: "ha-168758-m02", mac: "52:54:00:e2:30:30", ip: "192.168.39.70"} in network mk-ha-168758
	I1204 23:57:03.403859   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Getting to WaitForSSH function...
	I1204 23:57:03.403880   36539 main.go:141] libmachine: (ha-168758-m02) Reserved static IP address: 192.168.39.70
	I1204 23:57:03.403891   36539 main.go:141] libmachine: (ha-168758-m02) Waiting for SSH to be available...
	I1204 23:57:03.406504   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407026   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.407061   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407259   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH client type: external
	I1204 23:57:03.407286   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa (-rw-------)
	I1204 23:57:03.407378   36539 main.go:141] libmachine: (ha-168758-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:57:03.407404   36539 main.go:141] libmachine: (ha-168758-m02) DBG | About to run SSH command:
	I1204 23:57:03.407413   36539 main.go:141] libmachine: (ha-168758-m02) DBG | exit 0
	I1204 23:57:03.534727   36539 main.go:141] libmachine: (ha-168758-m02) DBG | SSH cmd err, output: <nil>: 
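Note: the external SSH invocation above (`exit 0` as docker@192.168.39.70 with host-key checking disabled and the generated id_rsa) is a pure reachability probe: a zero exit status means sshd is up and the key is accepted. A rough Go equivalent using os/exec is sketched below; it mirrors the flags visible in the log but is not the driver's own code, and the paths are examples.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` against the guest; a nil error means the
// daemon answered and accepted the key.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("192.168.39.70", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}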
	I1204 23:57:03.535017   36539 main.go:141] libmachine: (ha-168758-m02) KVM machine creation complete!
	I1204 23:57:03.535420   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:03.536018   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536208   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536363   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:57:03.536382   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetState
	I1204 23:57:03.537483   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:57:03.537500   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:57:03.537508   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:57:03.537516   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.539751   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540081   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.540109   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540281   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.540440   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540587   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540727   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.540880   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.541129   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.541145   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:57:03.649219   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:03.649249   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:57:03.649261   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.652199   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.652856   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652885   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.653079   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653247   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653419   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.653613   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.653828   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.653842   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:57:03.763190   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:57:03.763346   36539 main.go:141] libmachine: found compatible host: buildroot
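Note: the `cat /etc/os-release` output above (NAME=Buildroot, VERSION_ID=2023.02.9) is how the provisioner is chosen; the minikube ISO identifies itself as Buildroot, hence "found compatible host: buildroot". A small Go sketch of parsing that key=value format follows; it is illustrative only, not the actual detection code.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style `KEY=value` lines into a map,
// stripping optional surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	osr := parseOSRelease(sample)
	// The provisioner match is essentially a comparison on ID/NAME.
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot", osr["VERSION_ID"])
	}
}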
	I1204 23:57:03.763361   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:57:03.763371   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763602   36539 buildroot.go:166] provisioning hostname "ha-168758-m02"
	I1204 23:57:03.763632   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.766973   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767390   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.767437   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767603   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.767818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768055   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768271   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.768440   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.768709   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.768726   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m02 && echo "ha-168758-m02" | sudo tee /etc/hostname
	I1204 23:57:03.892672   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m02
	
	I1204 23:57:03.892702   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.895144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895422   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.895442   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895624   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.895781   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.895901   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.896009   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.896154   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.896362   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.896378   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:57:04.014565   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:04.014591   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:57:04.014605   36539 buildroot.go:174] setting up certificates
	I1204 23:57:04.014613   36539 provision.go:84] configureAuth start
	I1204 23:57:04.014620   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:04.014862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.017595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.017892   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.017919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.018050   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.020121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020411   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.020432   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020630   36539 provision.go:143] copyHostCerts
	I1204 23:57:04.020662   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020697   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:57:04.020725   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020805   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:57:04.020931   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.020973   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:57:04.020984   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.021027   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:57:04.021089   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021109   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:57:04.021118   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021154   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:57:04.021216   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m02 san=[127.0.0.1 192.168.39.70 ha-168758-m02 localhost minikube]
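Note: provision.go:117 above issues a per-machine server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the guest IP 192.168.39.70 and the names ha-168758-m02/localhost/minikube. The sketch below shows the general shape of issuing such a SAN-bearing certificate with crypto/x509; it is a simplified, self-contained illustration that generates its own throwaway CA rather than minikube's certificate code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate (stands in for the
	// persistent ca.pem / ca-key.pem under .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate with the IP and DNS SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-168758-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
		DNSNames:     []string{"ha-168758-m02", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}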
	I1204 23:57:04.229700   36539 provision.go:177] copyRemoteCerts
	I1204 23:57:04.229780   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:57:04.229810   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.233253   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233650   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.233681   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233956   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.234161   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.234318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.234434   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.316064   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:57:04.316143   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:57:04.338512   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:57:04.338585   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:57:04.360119   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:57:04.360178   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:57:04.381996   36539 provision.go:87] duration metric: took 367.37237ms to configureAuth
	I1204 23:57:04.382022   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:57:04.382234   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:04.382318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.384736   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385043   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.385085   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385263   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.385456   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385634   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385765   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.385893   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.386085   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.386102   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:57:04.615454   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:57:04.615480   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:57:04.615488   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetURL
	I1204 23:57:04.616607   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using libvirt version 6000000
	I1204 23:57:04.618773   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619138   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.619174   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619325   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:57:04.619342   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:57:04.619349   36539 client.go:171] duration metric: took 25.861833034s to LocalClient.Create
	I1204 23:57:04.619371   36539 start.go:167] duration metric: took 25.86188766s to libmachine.API.Create "ha-168758"
	I1204 23:57:04.619385   36539 start.go:293] postStartSetup for "ha-168758-m02" (driver="kvm2")
	I1204 23:57:04.619400   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:57:04.619421   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.619646   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:57:04.619675   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.622076   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622449   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.622480   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622622   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.622800   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.622964   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.623116   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.712567   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:57:04.716652   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:57:04.716681   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:57:04.716761   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:57:04.716858   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:57:04.716869   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:57:04.716975   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:57:04.726044   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:04.748709   36539 start.go:296] duration metric: took 129.292597ms for postStartSetup
	I1204 23:57:04.748769   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:04.749436   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.751961   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752346   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.752376   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752616   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:04.752793   36539 start.go:128] duration metric: took 26.013914076s to createHost
	I1204 23:57:04.752815   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.755319   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755652   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.755682   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755827   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.755971   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756086   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756189   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.756307   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.756479   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.756492   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:57:04.866426   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356624.824376765
	
	I1204 23:57:04.866448   36539 fix.go:216] guest clock: 1733356624.824376765
	I1204 23:57:04.866456   36539 fix.go:229] Guest: 2024-12-04 23:57:04.824376765 +0000 UTC Remote: 2024-12-04 23:57:04.752806033 +0000 UTC m=+70.097491807 (delta=71.570732ms)
	I1204 23:57:04.866470   36539 fix.go:200] guest clock delta is within tolerance: 71.570732ms
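Note: fix.go above runs `date +%s.%N` inside the guest, parses the fractional epoch timestamp, and compares it with the host clock at the moment the host was created; the delta here is about 71 ms, within tolerance, so no resync is needed. A minimal Go sketch of that comparison follows; the one-second tolerance is an assumed value for illustration only.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1733356624.824376765" into a time.Time (with float precision limits).
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733356624.824376765")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 12, 4, 23, 57, 4, 752806033, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}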
	I1204 23:57:04.866475   36539 start.go:83] releasing machines lock for "ha-168758-m02", held for 26.12770888s
	I1204 23:57:04.866493   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.866780   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.869289   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.869690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.869723   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.871575   36539 out.go:177] * Found network options:
	I1204 23:57:04.872861   36539 out.go:177]   - NO_PROXY=192.168.39.246
	W1204 23:57:04.874174   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.874219   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.874854   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875047   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875134   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:57:04.875168   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	W1204 23:57:04.875274   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.875348   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:57:04.875364   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.877779   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878165   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878255   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878300   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878462   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878636   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.878704   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878726   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878789   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.878869   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878940   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.878980   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.879169   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.879295   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:05.115514   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:57:05.121578   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:57:05.121649   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:57:05.137141   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:57:05.137166   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:57:05.137249   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:57:05.153397   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:57:05.167629   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:57:05.167697   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:57:05.180991   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:57:05.194817   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:57:05.319862   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:57:05.480267   36539 docker.go:233] disabling docker service ...
	I1204 23:57:05.480352   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:57:05.494063   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:57:05.506104   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:57:05.622700   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:57:05.745424   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:57:05.759393   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:57:05.776697   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:57:05.776762   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.787070   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:57:05.787137   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.797124   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.807348   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.816991   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:57:05.826982   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.836497   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.852423   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.862453   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:57:05.871289   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:57:05.871338   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:57:05.883727   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:57:05.893241   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:06.013483   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
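Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place over SSH, pinning pause_image to registry.k8s.io/pause:3.10, switching cgroup_manager to cgroupfs, adding conmon_cgroup = "pod" and the ip_unprivileged_port_start sysctl, loading br_netfilter, enabling IPv4 forwarding, and then restarting crio. A hedged local sketch of driving that kind of sed-based edit loop from Go is shown below; it runs the commands through sh against a throwaway temp file rather than through minikube's ssh_runner, so it is safe to run and is not the real implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Work on a throwaway copy instead of the real
	// /etc/crio/crio.conf.d/02-crio.conf so the sketch is safe to run.
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	f, err := os.CreateTemp("", "02-crio-*.conf")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(sample); err != nil {
		panic(err)
	}
	f.Close()

	// The same style of sed edits the log shows, minus sudo and SSH
	// (requires GNU sed, as on the Buildroot guest).
	edits := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + f.Name(),
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + f.Name(),
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + f.Name(),
	}
	for _, cmd := range edits {
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s failed: %v (%s)", cmd, err, out))
		}
	}
	edited, _ := os.ReadFile(f.Name())
	fmt.Print(string(edited))
}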
	I1204 23:57:06.107092   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:57:06.107171   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:57:06.112530   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:57:06.112612   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:57:06.116309   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:57:06.157302   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
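Note: after restarting crio, the tool waits up to 60 s for /var/run/crio/crio.sock to exist and for crictl to report a version (cri-o 1.29.1 here). A tiny Go sketch of that wait-for-path poll is below; the interval is an assumed value and the function is illustrative, not the real start.go logic.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (here the CRI socket) until it
// exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForPath("/var/run/crio/crio.sock", 60*time.Second)
	fmt.Println(err)
}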
	I1204 23:57:06.157371   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.184889   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.212833   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:57:06.214532   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:57:06.215684   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:06.218318   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:06.218715   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218889   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:57:06.222805   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
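Note: the shell pipeline above rebuilds /etc/hosts without any stale host.minikube.internal entry and appends "192.168.39.1 host.minikube.internal" so pods and the guest can reach the host. A hedged Go sketch of the same string transformation follows; addHostEntry is an invented helper operating on an in-memory copy, not the ssh_runner command the log shows.

package main

import (
	"fmt"
	"strings"
)

// addHostEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
func addHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	current := "127.0.0.1\tlocalhost\n127.0.1.1\tha-168758-m02\n"
	fmt.Print(addHostEntry(current, "192.168.39.1", "host.minikube.internal"))
}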
	I1204 23:57:06.234260   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:57:06.234431   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:06.234663   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.234694   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.249086   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1204 23:57:06.249474   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.249897   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.249916   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.250208   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.250399   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:57:06.251961   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:06.252244   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.252279   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.266555   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1204 23:57:06.267012   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.267500   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.267519   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.267807   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.267965   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:06.268109   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.70
	I1204 23:57:06.268124   36539 certs.go:194] generating shared ca certs ...
	I1204 23:57:06.268142   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.268293   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:57:06.268346   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:57:06.268359   36539 certs.go:256] generating profile certs ...
	I1204 23:57:06.268451   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:57:06.268485   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d
	I1204 23:57:06.268506   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.254]
	I1204 23:57:06.433427   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d ...
	I1204 23:57:06.433455   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d: {Name:mk9b5ab41b84365c7c87c13c8ccd0d5c3db76aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433623   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d ...
	I1204 23:57:06.433643   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d: {Name:mk61c76c74eaed61a46e313e778d5b43d1ba813c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433739   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:57:06.433896   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
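The apiserver profile certificate regenerated above is issued for every control-plane endpoint, including the kube-vip VIP 192.168.39.254. One way to confirm the SANs, sketched here against the crt path taken from this log, would be:

    # Print the Subject Alternative Names embedded in the freshly copied apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt \
      | grep -A1 'Subject Alternative Name'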
	I1204 23:57:06.434089   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:57:06.434105   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:57:06.434120   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:57:06.434140   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:57:06.434158   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:57:06.434176   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:57:06.434192   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:57:06.434208   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:57:06.434229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:57:06.434295   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:57:06.434333   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:57:06.434347   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:57:06.434398   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:57:06.434429   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:57:06.434462   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:57:06.434512   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:06.434550   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:57:06.434571   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.434596   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:57:06.434632   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:06.437677   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438091   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:06.438121   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438323   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:06.438484   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:06.438627   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:06.438715   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:06.518370   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:57:06.523710   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:57:06.534968   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:57:06.539161   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:57:06.549270   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:57:06.553348   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:57:06.562722   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:57:06.566572   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:57:06.575817   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:57:06.579621   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:57:06.588791   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:57:06.592406   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:57:06.601971   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:57:06.626102   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:57:06.647746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:57:06.670354   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:57:06.700509   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 23:57:06.722552   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:57:06.745113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:57:06.767304   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:57:06.789718   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:57:06.812073   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:57:06.834754   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:57:06.857779   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:57:06.872839   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:57:06.887702   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:57:06.903420   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:57:06.918686   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:57:06.933715   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:57:06.948744   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:57:06.963779   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:57:06.969267   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:57:06.978970   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.982989   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.983047   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.988382   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:57:06.997926   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:57:07.007572   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011739   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011796   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.017095   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:57:07.027066   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:57:07.037055   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041161   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041220   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.046426   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
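The test/openssl/ln sequence above follows the standard OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-name hash plus a .0 suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). As a sketch for the minikubeCA case:

    # Compute the subject-name hash OpenSSL uses when looking up trusted CAs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Expose the cert under /etc/ssl/certs/<hash>.0 so node-local TLS clients trust it.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"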
	I1204 23:57:07.056376   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:57:07.060388   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:57:07.060436   36539 kubeadm.go:934] updating node {m02 192.168.39.70 8443 v1.31.2 crio true true} ...
	I1204 23:57:07.060522   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:57:07.060547   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:57:07.060575   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:57:07.074631   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:57:07.074712   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
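The generated kube-vip static pod above runs in the host network with NET_ADMIN/NET_RAW, takes part in leader election via the plndr-cp-lock lease, and advertises 192.168.39.254 on eth0 with IPVS-based control-plane load-balancing enabled (hence the earlier modprobe of the ip_vs modules). Once the manifest is in place and a leader is elected, the VIP can be checked on the node; a hedged sketch, assuming SSH access to a control-plane machine:

    # The VIP should appear as an additional address on the interface kube-vip manages (eth0 here).
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # The static pod manifest is written into the kubelet manifest directory later in this log.
    ls /etc/kubernetes/manifests/kube-vip.yaml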
	I1204 23:57:07.074777   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.084179   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:57:07.084224   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.093203   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:57:07.093229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093277   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 23:57:07.093292   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093312   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 23:57:07.097086   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:57:07.097113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:57:08.011623   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.011699   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.017164   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:57:08.017200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:57:08.175961   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:08.200946   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.201051   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.211652   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:57:08.211693   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 23:57:08.563181   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:57:08.571736   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 23:57:08.586849   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:57:08.601801   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:57:08.616859   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:57:08.620378   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:57:08.631234   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:08.746289   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
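At this point the kubelet binary, the systemd unit, its 10-kubeadm.conf drop-in and the kube-vip manifest are all on the new node, and kubelet has been started. A quick sanity check over SSH would look roughly like this (editorial sketch, not part of the test):

    # Confirm the freshly installed kubelet unit is running on ha-168758-m02.
    systemctl is-active kubelet
    # Inspect the drop-in minikube just copied, including the hostname override and --node-ip flag.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf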
	I1204 23:57:08.762474   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:08.762951   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:08.763010   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:08.777798   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1204 23:57:08.778308   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:08.778784   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:08.778801   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:08.779085   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:08.779279   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:08.779432   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:57:08.779528   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:57:08.779552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:08.782464   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.782947   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:08.782974   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.783150   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:08.783306   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:08.783439   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:08.783593   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:08.959692   36539 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:08.959734   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I1204 23:57:30.700354   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (21.740590437s)
	I1204 23:57:30.700399   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:57:31.260769   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m02 minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:57:31.388605   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:57:31.507247   36539 start.go:319] duration metric: took 22.727809032s to joinCluster
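The two kubectl invocations above label the joined node and remove its control-plane NoSchedule taint. The resulting state can be inspected from any machine holding the cluster kubeconfig; for example (a sketch only):

    # Both control-plane nodes should be listed; ha-168758-m02 carries the minikube labels set above.
    kubectl get nodes --show-labels
    # The control-plane:NoSchedule taint was removed from m02, so this should report none for it.
    kubectl describe node ha-168758-m02 | grep -A2 Taints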
	I1204 23:57:31.507330   36539 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:31.507657   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:31.510077   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:57:31.511382   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:31.807145   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:31.866005   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:57:31.866366   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:57:31.866446   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:57:31.866697   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:31.866802   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:31.866815   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:31.866826   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:31.866832   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:31.894115   36539 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1204 23:57:32.367519   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.367541   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.367548   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.367552   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.372462   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:32.867267   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.867297   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.867307   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.867312   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.870593   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.367165   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.367188   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.367199   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.367204   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.371115   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.867446   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.867467   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.867490   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.867496   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.895642   36539 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1204 23:57:33.896043   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:34.367204   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.367227   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.367236   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.367241   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.370493   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:34.867119   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.867142   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.867150   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.867153   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.871875   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:35.367013   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.367043   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.367054   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.367059   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.369963   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:35.867691   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.867711   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.867718   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.867722   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.873237   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:57:36.366914   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.366938   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.366946   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.366957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.369646   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:36.370291   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:36.867182   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.867206   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.867216   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.867221   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.870020   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:37.367042   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.367063   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.367071   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.367074   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.370425   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:37.867722   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.867746   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.867757   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.867762   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.870832   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.367880   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.367907   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.367919   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.367925   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.371699   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.374216   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:38.866862   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.866882   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.866889   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.866894   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.870442   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.367314   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.367335   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.367343   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.367347   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.370397   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.867083   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.867112   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.867124   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.867131   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.869764   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.367183   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.367204   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.367212   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.367216   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.370122   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.866955   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.866977   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.866985   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.866989   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.869893   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.870550   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:41.367838   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.367867   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.367877   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.367883   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.371252   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:41.867051   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.867074   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.867081   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.867086   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.869735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.367874   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.367896   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.367904   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.367908   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.370905   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.867902   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.867923   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.867930   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.867936   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.871077   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:42.871630   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:43.367020   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.367041   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.367049   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.367053   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.369800   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:43.867917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.867935   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.867943   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.867948   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.871058   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:44.366921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.366944   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.366952   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.366956   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.369844   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:44.867849   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.867871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.867878   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.867882   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.871084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.367038   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.367064   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.367072   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.367076   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.370136   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.370824   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:45.867201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.867222   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.867230   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.867234   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.870440   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.367088   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.367111   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.367122   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.367128   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.370360   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.867494   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.867515   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.867523   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.867527   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.870477   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:47.367351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.367377   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.367388   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.367395   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.370421   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:47.371066   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:47.867351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.867371   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.867376   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.867380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.870561   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:48.367177   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.367201   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.367208   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.367212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.370109   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:48.867913   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.867940   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.867951   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.867957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.870839   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:49.367731   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.367752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.367763   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.367775   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.371251   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:49.371965   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:49.867061   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.867082   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.867091   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.867097   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.870692   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.367727   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.367748   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.367756   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.367760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.371359   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.372130   36539 node_ready.go:49] node "ha-168758-m02" has status "Ready":"True"
	I1204 23:57:50.372146   36539 node_ready.go:38] duration metric: took 18.505433084s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:50.372154   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
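The readiness loop above, and the per-pod checks that follow, poll the API server directly through minikube's round_trippers client. A manual equivalent with kubectl would be roughly (sketch, assuming the same kubeconfig):

    # Block until the joined control-plane node reports Ready (the test allows up to 6m here).
    kubectl wait --for=condition=Ready node/ha-168758-m02 --timeout=6m
    # Then review the system-critical pods that are checked next (CoreDNS, etcd, apiserver, ...).
    kubectl get pods -n kube-system -o wide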
	I1204 23:57:50.372222   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:50.372232   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.372239   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.372245   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.376642   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:50.381842   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.381907   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:57:50.381916   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.381922   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.381927   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.384768   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.385355   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.385369   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.385375   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.385379   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.387687   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.388085   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.388100   36539 pod_ready.go:82] duration metric: took 6.236474ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388109   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388153   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:57:50.388161   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.388167   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.388170   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.390871   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.391380   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.391394   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.391400   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.391405   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.394126   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.394638   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.394652   36539 pod_ready.go:82] duration metric: took 6.538378ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394659   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394701   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:57:50.394709   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.394715   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.394720   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.397059   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.397459   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.397471   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.397478   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.397482   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.399735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.400125   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.400139   36539 pod_ready.go:82] duration metric: took 5.474765ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400148   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400188   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:57:50.400196   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.400202   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.400206   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402193   36539 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 23:57:50.402740   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.402752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.402760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402765   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.404874   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.405354   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.405371   36539 pod_ready.go:82] duration metric: took 5.218099ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.405382   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.568781   36539 request.go:632] Waited for 163.320423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568837   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568845   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.568853   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.568860   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.572195   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.768439   36539 request.go:632] Waited for 195.374469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768509   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768529   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.768537   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.768542   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.771909   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.772411   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.772428   36539 pod_ready.go:82] duration metric: took 367.03986ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.772438   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.968760   36539 request.go:632] Waited for 196.264628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968844   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968862   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.968875   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.968883   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.972428   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.168517   36539 request.go:632] Waited for 195.379762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168570   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.168582   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.168586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.171966   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.172839   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.172864   36539 pod_ready.go:82] duration metric: took 400.419668ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.172878   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.368601   36539 request.go:632] Waited for 195.644185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368673   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368681   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.368689   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.368696   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.371786   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.568796   36539 request.go:632] Waited for 196.387928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568847   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568853   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.568859   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.568862   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.572174   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.572929   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.572951   36539 pod_ready.go:82] duration metric: took 400.064616ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.572965   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.767794   36539 request.go:632] Waited for 194.737211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767865   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.767885   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.767892   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.771726   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.967706   36539 request.go:632] Waited for 195.275144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967798   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967806   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.967816   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.967825   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.971641   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.972143   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.972162   36539 pod_ready.go:82] duration metric: took 399.188822ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.972176   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.168279   36539 request.go:632] Waited for 196.037931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168357   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168364   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.168371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.168380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.172095   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.368165   36539 request.go:632] Waited for 195.357162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368231   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368236   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.368244   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.368246   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.371117   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:52.371718   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.371734   36539 pod_ready.go:82] duration metric: took 399.551881ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.371744   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.568001   36539 request.go:632] Waited for 196.176636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568074   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568080   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.568088   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.568092   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.571383   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.768625   36539 request.go:632] Waited for 196.402632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768686   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768691   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.768702   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.768706   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.772096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.772550   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.772567   36539 pod_ready.go:82] duration metric: took 400.817883ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.772577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.968672   36539 request.go:632] Waited for 196.025622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968728   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968736   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.968747   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.968753   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.972192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.168261   36539 request.go:632] Waited for 195.422832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168312   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168321   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.168329   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.168332   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.171764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.172376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.172393   36539 pod_ready.go:82] duration metric: took 399.810959ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.172403   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.368473   36539 request.go:632] Waited for 196.003907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368564   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.368586   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.368598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.371851   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.567754   36539 request.go:632] Waited for 195.293783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567819   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567824   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.567831   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.567836   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.570770   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:53.571309   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.571333   36539 pod_ready.go:82] duration metric: took 398.922957ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.571348   36539 pod_ready.go:39] duration metric: took 3.199181638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:57:53.571367   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:57:53.571414   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:57:53.585872   36539 api_server.go:72] duration metric: took 22.078506409s to wait for apiserver process to appear ...
	I1204 23:57:53.585898   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:57:53.585919   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:57:53.591379   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:57:53.591435   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:57:53.591440   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.591447   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.591452   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.592362   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:57:53.592455   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:57:53.592472   36539 api_server.go:131] duration metric: took 6.568025ms to wait for apiserver health ...
	I1204 23:57:53.592481   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:57:53.767827   36539 request.go:632] Waited for 175.285413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767882   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767888   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.767897   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.767903   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.772373   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:53.776877   36539 system_pods.go:59] 17 kube-system pods found
	I1204 23:57:53.776902   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:53.776908   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:53.776912   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:53.776916   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:53.776920   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:53.776924   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:53.776927   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:53.776930   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:53.776934   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:53.776937   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:53.776941   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:53.776946   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:53.776949   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:53.776954   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:53.776956   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:53.776962   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:53.776965   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:53.776972   36539 system_pods.go:74] duration metric: took 184.481749ms to wait for pod list to return data ...
	I1204 23:57:53.777015   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:57:53.968124   36539 request.go:632] Waited for 191.035378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968194   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968199   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.968206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.968213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.971592   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.971802   36539 default_sa.go:45] found service account: "default"
	I1204 23:57:53.971821   36539 default_sa.go:55] duration metric: took 194.795603ms for default service account to be created ...
	I1204 23:57:53.971829   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:57:54.168141   36539 request.go:632] Waited for 196.235523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168209   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.168224   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.168233   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.172658   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:54.176765   36539 system_pods.go:86] 17 kube-system pods found
	I1204 23:57:54.176792   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:54.176798   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:54.176803   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:54.176806   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:54.176809   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:54.176813   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:54.176816   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:54.176819   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:54.176823   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:54.176835   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:54.176843   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:54.176846   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:54.176850   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:54.176853   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:54.176856   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:54.176859   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:54.176862   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:54.176867   36539 system_pods.go:126] duration metric: took 205.031013ms to wait for k8s-apps to be running ...
	I1204 23:57:54.176876   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:57:54.176915   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:54.192123   36539 system_svc.go:56] duration metric: took 15.235714ms WaitForService to wait for kubelet
	I1204 23:57:54.192153   36539 kubeadm.go:582] duration metric: took 22.684789622s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:57:54.192178   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:57:54.368510   36539 request.go:632] Waited for 176.251249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368569   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368574   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.368581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.368586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.371935   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:54.372940   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372962   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372973   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372977   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372982   36539 node_conditions.go:105] duration metric: took 180.798569ms to run NodePressure ...
	I1204 23:57:54.372992   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:57:54.373014   36539 start.go:255] writing updated cluster config ...
	I1204 23:57:54.375210   36539 out.go:201] 
	I1204 23:57:54.376552   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:54.376634   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.378003   36539 out.go:177] * Starting "ha-168758-m03" control-plane node in "ha-168758" cluster
	I1204 23:57:54.379143   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:57:54.379161   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:57:54.379243   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:57:54.379255   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:57:54.379336   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.379486   36539 start.go:360] acquireMachinesLock for ha-168758-m03: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:57:54.379524   36539 start.go:364] duration metric: took 21.433µs to acquireMachinesLock for "ha-168758-m03"
	I1204 23:57:54.379541   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:54.379640   36539 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 23:57:54.382003   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:57:54.382100   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:54.382136   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:54.396586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I1204 23:57:54.397026   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:54.397468   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:54.397491   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:54.397753   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:54.397933   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:57:54.398074   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:57:54.398206   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:57:54.398237   36539 client.go:168] LocalClient.Create starting
	I1204 23:57:54.398271   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:57:54.398314   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398335   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398396   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:57:54.398425   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398442   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398467   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:57:54.398479   36539 main.go:141] libmachine: (ha-168758-m03) Calling .PreCreateCheck
	I1204 23:57:54.398600   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:57:54.398924   36539 main.go:141] libmachine: Creating machine...
	I1204 23:57:54.398936   36539 main.go:141] libmachine: (ha-168758-m03) Calling .Create
	I1204 23:57:54.399103   36539 main.go:141] libmachine: (ha-168758-m03) Creating KVM machine...
	I1204 23:57:54.400268   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing default KVM network
	I1204 23:57:54.400380   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing private KVM network mk-ha-168758
	I1204 23:57:54.400501   36539 main.go:141] libmachine: (ha-168758-m03) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.400521   36539 main.go:141] libmachine: (ha-168758-m03) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:57:54.400579   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.400496   37304 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.400687   36539 main.go:141] libmachine: (ha-168758-m03) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:57:54.638854   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.638724   37304 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa...
	I1204 23:57:54.688376   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688260   37304 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk...
	I1204 23:57:54.688404   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing magic tar header
	I1204 23:57:54.688416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing SSH key tar header
	I1204 23:57:54.688424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688388   37304 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.688510   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03
	I1204 23:57:54.688554   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 (perms=drwx------)
	I1204 23:57:54.688585   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:57:54.688596   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:57:54.688610   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.688624   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:57:54.688642   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:57:54.688652   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:57:54.688658   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:57:54.688668   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:57:54.688679   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:57:54.688694   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:57:54.688711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home
	I1204 23:57:54.688721   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:54.688733   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Skipping /home - not owner
	I1204 23:57:54.689655   36539 main.go:141] libmachine: (ha-168758-m03) define libvirt domain using xml: 
	I1204 23:57:54.689672   36539 main.go:141] libmachine: (ha-168758-m03) <domain type='kvm'>
	I1204 23:57:54.689678   36539 main.go:141] libmachine: (ha-168758-m03)   <name>ha-168758-m03</name>
	I1204 23:57:54.689682   36539 main.go:141] libmachine: (ha-168758-m03)   <memory unit='MiB'>2200</memory>
	I1204 23:57:54.689687   36539 main.go:141] libmachine: (ha-168758-m03)   <vcpu>2</vcpu>
	I1204 23:57:54.689691   36539 main.go:141] libmachine: (ha-168758-m03)   <features>
	I1204 23:57:54.689715   36539 main.go:141] libmachine: (ha-168758-m03)     <acpi/>
	I1204 23:57:54.689732   36539 main.go:141] libmachine: (ha-168758-m03)     <apic/>
	I1204 23:57:54.689738   36539 main.go:141] libmachine: (ha-168758-m03)     <pae/>
	I1204 23:57:54.689742   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.689747   36539 main.go:141] libmachine: (ha-168758-m03)   </features>
	I1204 23:57:54.689752   36539 main.go:141] libmachine: (ha-168758-m03)   <cpu mode='host-passthrough'>
	I1204 23:57:54.689758   36539 main.go:141] libmachine: (ha-168758-m03)   
	I1204 23:57:54.689763   36539 main.go:141] libmachine: (ha-168758-m03)   </cpu>
	I1204 23:57:54.689771   36539 main.go:141] libmachine: (ha-168758-m03)   <os>
	I1204 23:57:54.689775   36539 main.go:141] libmachine: (ha-168758-m03)     <type>hvm</type>
	I1204 23:57:54.689783   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='cdrom'/>
	I1204 23:57:54.689786   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='hd'/>
	I1204 23:57:54.689799   36539 main.go:141] libmachine: (ha-168758-m03)     <bootmenu enable='no'/>
	I1204 23:57:54.689805   36539 main.go:141] libmachine: (ha-168758-m03)   </os>
	I1204 23:57:54.689812   36539 main.go:141] libmachine: (ha-168758-m03)   <devices>
	I1204 23:57:54.689817   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='cdrom'>
	I1204 23:57:54.689828   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/boot2docker.iso'/>
	I1204 23:57:54.689833   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hdc' bus='scsi'/>
	I1204 23:57:54.689840   36539 main.go:141] libmachine: (ha-168758-m03)       <readonly/>
	I1204 23:57:54.689844   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689852   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='disk'>
	I1204 23:57:54.689857   36539 main.go:141] libmachine: (ha-168758-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:57:54.689867   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk'/>
	I1204 23:57:54.689875   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hda' bus='virtio'/>
	I1204 23:57:54.689882   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689886   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689891   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='mk-ha-168758'/>
	I1204 23:57:54.689898   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689903   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689910   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689917   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='default'/>
	I1204 23:57:54.689924   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689928   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689935   36539 main.go:141] libmachine: (ha-168758-m03)     <serial type='pty'>
	I1204 23:57:54.689957   36539 main.go:141] libmachine: (ha-168758-m03)       <target port='0'/>
	I1204 23:57:54.689973   36539 main.go:141] libmachine: (ha-168758-m03)     </serial>
	I1204 23:57:54.689983   36539 main.go:141] libmachine: (ha-168758-m03)     <console type='pty'>
	I1204 23:57:54.689987   36539 main.go:141] libmachine: (ha-168758-m03)       <target type='serial' port='0'/>
	I1204 23:57:54.689993   36539 main.go:141] libmachine: (ha-168758-m03)     </console>
	I1204 23:57:54.689997   36539 main.go:141] libmachine: (ha-168758-m03)     <rng model='virtio'>
	I1204 23:57:54.690005   36539 main.go:141] libmachine: (ha-168758-m03)       <backend model='random'>/dev/random</backend>
	I1204 23:57:54.690009   36539 main.go:141] libmachine: (ha-168758-m03)     </rng>
	I1204 23:57:54.690013   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690016   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690021   36539 main.go:141] libmachine: (ha-168758-m03)   </devices>
	I1204 23:57:54.690026   36539 main.go:141] libmachine: (ha-168758-m03) </domain>
	I1204 23:57:54.690035   36539 main.go:141] libmachine: (ha-168758-m03) 
	I1204 23:57:54.696711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:45:00:1a in network default
	I1204 23:57:54.697413   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:54.697439   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring networks are active...
	I1204 23:57:54.698233   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network default is active
	I1204 23:57:54.698580   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network mk-ha-168758 is active
	I1204 23:57:54.698931   36539 main.go:141] libmachine: (ha-168758-m03) Getting domain xml...
	I1204 23:57:54.699649   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:55.920334   36539 main.go:141] libmachine: (ha-168758-m03) Waiting to get IP...
	I1204 23:57:55.921079   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:55.921466   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:55.921501   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:55.921456   37304 retry.go:31] will retry after 209.695163ms: waiting for machine to come up
	I1204 23:57:56.132761   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.133191   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.133212   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.133145   37304 retry.go:31] will retry after 335.842051ms: waiting for machine to come up
	I1204 23:57:56.470639   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.471086   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.471135   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.471032   37304 retry.go:31] will retry after 296.078556ms: waiting for machine to come up
	I1204 23:57:56.768275   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.768757   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.768777   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.768709   37304 retry.go:31] will retry after 507.499948ms: waiting for machine to come up
	I1204 23:57:57.277223   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.277662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.277692   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.277613   37304 retry.go:31] will retry after 706.137269ms: waiting for machine to come up
	I1204 23:57:57.985335   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.985740   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.985763   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.985694   37304 retry.go:31] will retry after 901.171563ms: waiting for machine to come up
	I1204 23:57:58.888734   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:58.889195   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:58.889217   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:58.889148   37304 retry.go:31] will retry after 1.040272258s: waiting for machine to come up
	I1204 23:57:59.931201   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:59.931634   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:59.931662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:59.931580   37304 retry.go:31] will retry after 1.47549599s: waiting for machine to come up
	I1204 23:58:01.408702   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:01.409147   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:01.409198   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:01.409117   37304 retry.go:31] will retry after 1.529862151s: waiting for machine to come up
	I1204 23:58:02.941039   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:02.941538   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:02.941562   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:02.941502   37304 retry.go:31] will retry after 1.818025113s: waiting for machine to come up
	I1204 23:58:04.760910   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:04.761392   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:04.761424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:04.761351   37304 retry.go:31] will retry after 1.812557198s: waiting for machine to come up
	I1204 23:58:06.576024   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:06.576523   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:06.576553   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:06.576452   37304 retry.go:31] will retry after 3.417018451s: waiting for machine to come up
	I1204 23:58:09.994797   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:09.995228   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:09.995255   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:09.995175   37304 retry.go:31] will retry after 3.708395185s: waiting for machine to come up
	I1204 23:58:13.707953   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:13.708378   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:13.708407   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:13.708331   37304 retry.go:31] will retry after 5.536922009s: waiting for machine to come up
	I1204 23:58:19.251010   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251548   36539 main.go:141] libmachine: (ha-168758-m03) Found IP for machine: 192.168.39.211
	I1204 23:58:19.251581   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251590   36539 main.go:141] libmachine: (ha-168758-m03) Reserving static IP address...
	I1204 23:58:19.252021   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find host DHCP lease matching {name: "ha-168758-m03", mac: "52:54:00:25:7a:4c", ip: "192.168.39.211"} in network mk-ha-168758
	I1204 23:58:19.326562   36539 main.go:141] libmachine: (ha-168758-m03) Reserved static IP address: 192.168.39.211
	I1204 23:58:19.326591   36539 main.go:141] libmachine: (ha-168758-m03) Waiting for SSH to be available...
	I1204 23:58:19.326600   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Getting to WaitForSSH function...
	I1204 23:58:19.329626   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330027   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.330055   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330234   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH client type: external
	I1204 23:58:19.330259   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa (-rw-------)
	I1204 23:58:19.330287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:58:19.330300   36539 main.go:141] libmachine: (ha-168758-m03) DBG | About to run SSH command:
	I1204 23:58:19.330316   36539 main.go:141] libmachine: (ha-168758-m03) DBG | exit 0
	I1204 23:58:19.458033   36539 main.go:141] libmachine: (ha-168758-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 23:58:19.458296   36539 main.go:141] libmachine: (ha-168758-m03) KVM machine creation complete!
	I1204 23:58:19.458639   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:19.459211   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459418   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459604   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:58:19.459621   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetState
	I1204 23:58:19.460815   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:58:19.460827   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:58:19.460832   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:58:19.460837   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.463322   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463682   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.463709   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463819   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.463998   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464138   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464299   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.464466   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.464650   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.464661   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:58:19.569758   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.569785   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:58:19.569794   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.573011   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.573434   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573531   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.573755   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.573966   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.574140   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.574339   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.574549   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.574565   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:58:19.678634   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:58:19.678713   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:58:19.678731   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:58:19.678746   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679001   36539 buildroot.go:166] provisioning hostname "ha-168758-m03"
	I1204 23:58:19.679025   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679187   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.681791   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682166   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.682203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682354   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.682571   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682718   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682893   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.683080   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.683252   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.683265   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m03 && echo "ha-168758-m03" | sudo tee /etc/hostname
	I1204 23:58:19.805397   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m03
	
	I1204 23:58:19.805429   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.808431   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.808856   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.808878   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.809012   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.809177   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809288   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809401   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.809650   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.809863   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.809887   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:58:19.923097   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.923127   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:58:19.923146   36539 buildroot.go:174] setting up certificates
	I1204 23:58:19.923155   36539 provision.go:84] configureAuth start
	I1204 23:58:19.923166   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.923476   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:19.926158   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926547   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.926572   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926691   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.928951   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929352   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.929383   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929531   36539 provision.go:143] copyHostCerts
	I1204 23:58:19.929556   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929583   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:58:19.929592   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929653   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:58:19.929721   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929747   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:58:19.929754   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929777   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:58:19.929820   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929837   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:58:19.929844   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929863   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:58:19.929908   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m03 san=[127.0.0.1 192.168.39.211 ha-168758-m03 localhost minikube]
	I1204 23:58:20.121053   36539 provision.go:177] copyRemoteCerts
	I1204 23:58:20.121105   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:58:20.121126   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.123934   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124493   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.124528   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124773   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.124964   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.125130   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.125289   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.208033   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:58:20.208112   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:58:20.233800   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:58:20.233885   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:58:20.258548   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:58:20.258616   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:58:20.282884   36539 provision.go:87] duration metric: took 359.716273ms to configureAuth
	I1204 23:58:20.282915   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:58:20.283121   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:20.283188   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.285836   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286252   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.286287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286487   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.286665   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.286810   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.287008   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.287190   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.287357   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.287370   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:58:20.518772   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:58:20.518795   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:58:20.518802   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetURL
	I1204 23:58:20.520187   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using libvirt version 6000000
	I1204 23:58:20.522653   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523105   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.523133   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523291   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:58:20.523307   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:58:20.523315   36539 client.go:171] duration metric: took 26.125066489s to LocalClient.Create
	I1204 23:58:20.523341   36539 start.go:167] duration metric: took 26.125137032s to libmachine.API.Create "ha-168758"
	I1204 23:58:20.523357   36539 start.go:293] postStartSetup for "ha-168758-m03" (driver="kvm2")
	I1204 23:58:20.523373   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:58:20.523396   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.523640   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:58:20.523662   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.525716   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526081   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.526107   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526258   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.526431   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.526593   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.526718   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.611923   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:58:20.616136   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:58:20.616164   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:58:20.616222   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:58:20.616301   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:58:20.616310   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:58:20.616393   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:58:20.625581   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:20.649393   36539 start.go:296] duration metric: took 126.018678ms for postStartSetup
	I1204 23:58:20.649447   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:20.650084   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.652794   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653173   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.653203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653515   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:58:20.653706   36539 start.go:128] duration metric: took 26.274055056s to createHost
	I1204 23:58:20.653726   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.657371   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.657844   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.657871   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.658096   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.658293   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658453   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658591   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.658709   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.658880   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.658893   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:58:20.762401   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356700.741731450
	
	I1204 23:58:20.762429   36539 fix.go:216] guest clock: 1733356700.741731450
	I1204 23:58:20.762440   36539 fix.go:229] Guest: 2024-12-04 23:58:20.74173145 +0000 UTC Remote: 2024-12-04 23:58:20.653717113 +0000 UTC m=+145.998402889 (delta=88.014337ms)
	I1204 23:58:20.762459   36539 fix.go:200] guest clock delta is within tolerance: 88.014337ms
	I1204 23:58:20.762465   36539 start.go:83] releasing machines lock for "ha-168758-m03", held for 26.382931771s
	I1204 23:58:20.762482   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.762743   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.765725   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.766178   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.766206   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.769126   36539 out.go:177] * Found network options:
	I1204 23:58:20.770828   36539 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.70
	W1204 23:58:20.772463   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.772484   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.772496   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773077   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773256   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773351   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:58:20.773389   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	W1204 23:58:20.773530   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.773562   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.773617   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:58:20.773705   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.776558   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776588   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776979   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777009   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777099   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777175   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777373   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777397   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777602   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777604   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777735   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777749   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.777829   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:21.014947   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:58:21.021071   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:58:21.021137   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:58:21.039078   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:58:21.039110   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:58:21.039177   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:58:21.058712   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:58:21.073746   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:58:21.073808   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:58:21.087957   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:58:21.101618   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:58:21.221628   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:58:21.378405   36539 docker.go:233] disabling docker service ...
	I1204 23:58:21.378477   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:58:21.394967   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:58:21.408722   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:58:21.562596   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:58:21.684473   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:58:21.698325   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:58:21.716302   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:58:21.716372   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.726130   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:58:21.726192   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.737846   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.748038   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.758637   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:58:21.769448   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.779664   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.796293   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.806603   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:58:21.816322   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:58:21.816373   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:58:21.829466   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:58:21.839041   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:21.954707   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:58:22.043532   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:58:22.043590   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:58:22.048235   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:58:22.048287   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:58:22.051918   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:58:22.091888   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:58:22.091959   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.119246   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.147844   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:58:22.149672   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:58:22.151366   36539 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.70
	I1204 23:58:22.152919   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:22.155788   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:22.156156   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156371   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:58:22.161001   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:22.172781   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:58:22.173021   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:22.173277   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.173312   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.188201   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I1204 23:58:22.188638   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.189068   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.189086   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.189487   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.189677   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:58:22.191201   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:22.191497   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.191540   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.206205   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I1204 23:58:22.206662   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.207121   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.207141   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.207423   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.207583   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:22.207723   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.211
	I1204 23:58:22.207733   36539 certs.go:194] generating shared ca certs ...
	I1204 23:58:22.207746   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.207850   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:58:22.207885   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:58:22.207894   36539 certs.go:256] generating profile certs ...
	I1204 23:58:22.207958   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:58:22.207980   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0
	I1204 23:58:22.207995   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1204 23:58:22.288746   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 ...
	I1204 23:58:22.288772   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0: {Name:mk226e78394d9a1d1481fe02645dfa0bdae47c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.288924   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 ...
	I1204 23:58:22.288937   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0: {Name:mk254430c45779d3847a67c61786cc3c2c6ae80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.289003   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:58:22.289131   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:58:22.289249   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:58:22.289263   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:58:22.289275   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:58:22.289288   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:58:22.289301   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:58:22.289314   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:58:22.289328   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:58:22.289344   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:58:22.306093   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:58:22.306200   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:58:22.306239   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:58:22.306253   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:58:22.306298   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:58:22.306334   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:58:22.306370   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:58:22.306426   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:22.306471   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.306493   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.306511   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.306552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:22.309680   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310101   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:22.310130   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310297   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:22.310547   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:22.310740   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:22.310907   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:22.386340   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:58:22.392285   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:58:22.405695   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:58:22.409772   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:58:22.423743   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:58:22.427677   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:58:22.437721   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:58:22.441623   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:58:22.452543   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:58:22.461102   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:58:22.471271   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:58:22.477234   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:58:22.491216   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:58:22.516244   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:58:22.540200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:58:22.565503   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:58:22.589637   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 23:58:22.614321   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 23:58:22.637487   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:58:22.661061   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:58:22.683528   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:58:22.706445   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:58:22.729085   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:58:22.750946   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:58:22.766659   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:58:22.782041   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:58:22.797608   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:58:22.812913   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:58:22.828464   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:58:22.843593   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:58:22.859636   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:58:22.865420   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:58:22.876063   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880415   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880469   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.885958   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:58:22.896219   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:58:22.906770   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910894   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.916407   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:58:22.926735   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:58:22.937057   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941137   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941190   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.946692   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:58:22.959164   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:58:22.963452   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:58:22.963502   36539 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1204 23:58:22.963582   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:58:22.963624   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:58:22.963666   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:58:22.979996   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:58:22.980050   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:58:22.980122   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.989754   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:58:22.989828   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.999376   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 23:58:22.999414   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 23:58:22.999429   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999442   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:58:22.999460   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:58:22.999481   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:22.999484   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999564   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:23.016751   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:58:23.016792   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:58:23.016813   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.016873   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:58:23.016893   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:58:23.016908   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.044749   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:58:23.044784   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1204 23:58:23.845384   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:58:23.856608   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 23:58:23.873561   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:58:23.888925   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:58:23.904283   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:58:23.908166   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:23.921626   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:24.054957   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:24.071310   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:24.071774   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:24.071826   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:24.087392   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I1204 23:58:24.087908   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:24.088426   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:24.088445   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:24.088872   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:24.089072   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:24.089234   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:24.089406   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:58:24.089426   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:24.092795   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093319   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:24.093346   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093418   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:24.093603   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:24.093787   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:24.093916   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:24.257067   36539 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:24.257133   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1204 23:58:46.569250   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.312080737s)
	I1204 23:58:46.569302   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:58:47.113692   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m03 minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:58:47.254473   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:58:47.374583   36539 start.go:319] duration metric: took 23.285344776s to joinCluster
	I1204 23:58:47.374658   36539 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:47.375005   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:47.375915   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:58:47.376822   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:47.645069   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:47.700465   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:58:47.700774   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:58:47.700882   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:58:47.701229   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:58:47.701326   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:47.701346   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:47.701361   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:47.701372   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:47.704946   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.202242   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.202269   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.202277   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.202281   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.205645   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.701516   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.701536   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.701543   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.701547   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.704919   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.202192   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.202215   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.202225   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.202231   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.205567   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.701894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.701912   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.701920   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.701928   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.705415   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.706113   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:50.201921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.201950   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.201958   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.201963   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.205192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:50.701850   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.701878   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.701890   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.701897   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.705478   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.201426   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.201446   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.201457   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.201463   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.204423   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:51.702262   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.702285   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.702293   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.702297   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.706096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.706649   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:52.201560   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.201578   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.201585   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.201590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.204456   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:52.701709   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.701729   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.701737   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.701742   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.705302   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:53.201587   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.201627   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.201637   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.201642   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.206081   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:58:53.702110   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.702136   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.702146   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.702151   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.707592   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:58:53.708884   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:54.202266   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.202287   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.202299   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.202303   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.205388   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:54.702184   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.702205   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.702212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.702217   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.705854   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.202090   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.202109   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.202117   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.202122   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.205278   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.702366   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.702386   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.702393   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.702398   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.705911   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.202018   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.202039   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.202048   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.202052   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.205205   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.205770   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:56.702046   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.702068   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.702077   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.702079   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.705883   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.201534   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.201553   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.201560   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.201565   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.204718   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.702166   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.702192   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.702207   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.702215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.706100   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.202178   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.202197   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.202207   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.202213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.205266   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.205989   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:58.702432   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.702453   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.702461   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.702465   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.706164   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:59.201677   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.201698   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.201706   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.204712   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:59.702080   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.702106   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.702114   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.702118   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.705833   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.202377   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.202398   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.202405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.202410   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.204828   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:00.701624   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.701644   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.701652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.701655   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.704805   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.705308   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:01.201668   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.201694   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.201704   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.205149   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:01.702232   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.702261   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.702269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.702275   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.706624   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:02.201616   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.201640   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.201652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.201658   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.213640   36539 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1204 23:59:02.701482   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.701500   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.701507   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.701511   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.707401   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:02.707932   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:03.202348   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.202370   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.202382   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.202386   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.206086   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:03.702128   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.702153   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.702167   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.702176   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.705764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.202048   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.202071   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.202079   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.202083   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.205585   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.702350   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.702371   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.702379   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.702383   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.706391   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.201695   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.201717   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.201724   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.201728   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.206270   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:05.207294   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:05.701694   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.701716   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.701727   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.701733   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.705389   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.706154   36539 node_ready.go:49] node "ha-168758-m03" has status "Ready":"True"
	I1204 23:59:05.706172   36539 node_ready.go:38] duration metric: took 18.004918776s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:59:05.706181   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:05.706250   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:05.706260   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.706266   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.706270   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.713324   36539 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 23:59:05.720724   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.720806   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:59:05.720815   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.720829   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.720837   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.724822   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.725428   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.725443   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.725451   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.725455   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.728496   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.729387   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.729410   36539 pod_ready.go:82] duration metric: took 8.658373ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729423   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729491   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:59:05.729504   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.729516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.729523   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.732217   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.732771   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.732784   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.732791   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.732795   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.735517   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.736112   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.736127   36539 pod_ready.go:82] duration metric: took 6.697955ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736135   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736189   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:59:05.736199   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.736206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.736209   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.739373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.740248   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.740263   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.740269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.740273   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.742492   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.742913   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.742932   36539 pod_ready.go:82] duration metric: took 6.791875ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742940   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742986   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:59:05.742994   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.743000   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.743004   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.745506   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.746152   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:05.746167   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.746173   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.746177   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.748564   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.749064   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.749082   36539 pod_ready.go:82] duration metric: took 6.135218ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.749094   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.902481   36539 request.go:632] Waited for 153.318139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902555   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902560   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.902568   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.902576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.906398   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.102389   36539 request.go:632] Waited for 195.399546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102443   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102448   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.102476   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.102487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.105971   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.106400   36539 pod_ready.go:93] pod "etcd-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.106416   36539 pod_ready.go:82] duration metric: took 357.316023ms for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.106432   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.302535   36539 request.go:632] Waited for 196.013383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302597   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302608   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.302620   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.302626   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.305529   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:06.502522   36539 request.go:632] Waited for 196.356899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502576   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.502590   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.502598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.507875   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:06.508456   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.508479   36539 pod_ready.go:82] duration metric: took 402.037453ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.508493   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.702449   36539 request.go:632] Waited for 193.867804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702503   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702509   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.702516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.702521   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.706330   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.902289   36539 request.go:632] Waited for 195.373137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902347   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902355   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.902371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.902380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.905544   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.906052   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.906069   36539 pod_ready.go:82] duration metric: took 397.563799ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.906080   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.102210   36539 request.go:632] Waited for 196.065026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102278   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102285   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.102298   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.102311   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.105396   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.302017   36539 request.go:632] Waited for 195.70193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302103   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302115   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.302147   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.302157   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.307086   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:07.307589   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.307617   36539 pod_ready.go:82] duration metric: took 401.528153ms for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.307631   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.502581   36539 request.go:632] Waited for 194.865138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502634   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502639   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.502646   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.502652   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.506125   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.702202   36539 request.go:632] Waited for 195.372404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702281   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702288   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.702295   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.702302   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.705978   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.706658   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.706678   36539 pod_ready.go:82] duration metric: took 399.03822ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.706694   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.901924   36539 request.go:632] Waited for 195.154365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902019   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902029   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.902036   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.902039   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.905257   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.102494   36539 request.go:632] Waited for 196.34114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102566   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.102574   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.102581   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.106643   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:08.107401   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.107420   36539 pod_ready.go:82] duration metric: took 400.717315ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.107434   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.302582   36539 request.go:632] Waited for 195.013676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302656   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302666   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.302679   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.302688   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.308848   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:08.501837   36539 request.go:632] Waited for 192.289915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501891   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501918   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.501932   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.501962   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.504980   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:08.505549   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.505567   36539 pod_ready.go:82] duration metric: took 398.125286ms for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.505577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.702722   36539 request.go:632] Waited for 197.087803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702788   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702797   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.702805   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.702809   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.706146   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.902503   36539 request.go:632] Waited for 195.802416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902574   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.902588   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.902593   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.905815   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.906362   36539 pod_ready.go:93] pod "kube-proxy-9hxpd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.906385   36539 pod_ready.go:82] duration metric: took 400.800234ms for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.906398   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.101673   36539 request.go:632] Waited for 195.212044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101755   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101767   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.101780   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.101787   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.105084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.302204   36539 request.go:632] Waited for 196.371294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302277   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302283   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.302290   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.302295   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.305317   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.306001   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.306020   36539 pod_ready.go:82] duration metric: took 399.614816ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.306032   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.502424   36539 request.go:632] Waited for 196.313347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502475   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502483   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.502494   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.502501   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.505899   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.702597   36539 request.go:632] Waited for 196.046362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702661   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702667   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.702675   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.702679   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.706005   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.706501   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.706518   36539 pod_ready.go:82] duration metric: took 400.479582ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.706528   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.902149   36539 request.go:632] Waited for 195.548343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902203   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902208   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.902215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.902218   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.905375   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.102485   36539 request.go:632] Waited for 196.29891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102549   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102556   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.102567   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.102576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.105620   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.106279   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.106297   36539 pod_ready.go:82] duration metric: took 399.763414ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.106307   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.302480   36539 request.go:632] Waited for 196.111079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302570   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.302581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.302590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.306007   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.501852   36539 request.go:632] Waited for 195.277553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501924   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.501957   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.501978   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.505309   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.505892   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.505913   36539 pod_ready.go:82] duration metric: took 399.598555ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.505927   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.702021   36539 request.go:632] Waited for 195.988022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702085   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702093   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.702104   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.702111   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.705373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.902383   36539 request.go:632] Waited for 196.36776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902455   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902463   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.902473   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.902487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.905908   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.906376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.906394   36539 pod_ready.go:82] duration metric: took 400.458551ms for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.906408   36539 pod_ready.go:39] duration metric: took 5.200215763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:10.906427   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:59:10.906489   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:59:10.925438   36539 api_server.go:72] duration metric: took 23.550742485s to wait for apiserver process to appear ...
	I1204 23:59:10.925468   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:59:10.925490   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:59:10.929840   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:59:10.929894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:59:10.929901   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.929909   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.929916   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.930671   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:59:10.930740   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:59:10.930758   36539 api_server.go:131] duration metric: took 5.282883ms to wait for apiserver health ...
	I1204 23:59:10.930771   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:59:11.102180   36539 request.go:632] Waited for 171.329772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102245   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102253   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.102264   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.102275   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.108643   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.116294   36539 system_pods.go:59] 24 kube-system pods found
	I1204 23:59:11.116322   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.116329   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.116335   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.116341   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.116346   36539 system_pods.go:61] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.116350   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.116356   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.116361   36539 system_pods.go:61] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.116367   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.116375   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.116381   36539 system_pods.go:61] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.116388   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.116399   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.116406   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.116415   36539 system_pods.go:61] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.116421   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.116429   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.116435   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.116444   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.116450   36539 system_pods.go:61] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.116458   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.116464   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.116472   36539 system_pods.go:61] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.116480   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.116490   36539 system_pods.go:74] duration metric: took 185.709136ms to wait for pod list to return data ...
	I1204 23:59:11.116504   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:59:11.302342   36539 request.go:632] Waited for 185.765456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302393   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302399   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.302405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.302409   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.305795   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.305915   36539 default_sa.go:45] found service account: "default"
	I1204 23:59:11.305932   36539 default_sa.go:55] duration metric: took 189.416579ms for default service account to be created ...
	I1204 23:59:11.305962   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:59:11.502392   36539 request.go:632] Waited for 196.360375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502441   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502446   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.502454   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.502457   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.509349   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.515542   36539 system_pods.go:86] 24 kube-system pods found
	I1204 23:59:11.515567   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.515572   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.515576   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.515580   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.515584   36539 system_pods.go:89] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.515587   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.515592   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.515595   36539 system_pods.go:89] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.515599   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.515603   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.515607   36539 system_pods.go:89] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.515611   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.515615   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.515620   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.515624   36539 system_pods.go:89] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.515627   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.515630   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.515633   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.515636   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.515639   36539 system_pods.go:89] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.515642   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.515645   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.515648   36539 system_pods.go:89] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.515654   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.515660   36539 system_pods.go:126] duration metric: took 209.689107ms to wait for k8s-apps to be running ...
	I1204 23:59:11.515669   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:59:11.515708   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:59:11.532158   36539 system_svc.go:56] duration metric: took 16.479991ms WaitForService to wait for kubelet
	I1204 23:59:11.532192   36539 kubeadm.go:582] duration metric: took 24.157502148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:59:11.532215   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:59:11.702618   36539 request.go:632] Waited for 170.317658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702682   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702690   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.702701   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.702708   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.706495   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.707279   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707300   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707313   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707318   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707323   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707329   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707335   36539 node_conditions.go:105] duration metric: took 175.114412ms to run NodePressure ...
	I1204 23:59:11.707353   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:59:11.707385   36539 start.go:255] writing updated cluster config ...
	I1204 23:59:11.707662   36539 ssh_runner.go:195] Run: rm -f paused
	I1204 23:59:11.758613   36539 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:59:11.760643   36539 out.go:177] * Done! kubectl is now configured to use "ha-168758" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.317117478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=377ce658-e27b-4bfe-bdd7-d0ed5034376d name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.318560444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69822e1b-ae99-4acf-a148-382ea125cf6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.319061278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356979319039291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69822e1b-ae99-4acf-a148-382ea125cf6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.319598105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=222a4c74-b0aa-4dcd-89c2-da22eb05d21e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.319664705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=222a4c74-b0aa-4dcd-89c2-da22eb05d21e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.319992550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=222a4c74-b0aa-4dcd-89c2-da22eb05d21e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.355033736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d353d8e8-6a8d-44cd-af40-fb0e2bb1cbef name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.355107880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d353d8e8-6a8d-44cd-af40-fb0e2bb1cbef name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.356317036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cce51736-db5f-49c1-8273-5d0dd6779264 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.357105637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356979357080645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cce51736-db5f-49c1-8273-5d0dd6779264 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.357636118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c87b60d-a72b-4284-aa4e-2f5ae401f497 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.357688775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c87b60d-a72b-4284-aa4e-2f5ae401f497 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.358027453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c87b60d-a72b-4284-aa4e-2f5ae401f497 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.377221218Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ce99cbab-553e-46f8-8628-3c913e52574c name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.377565681Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-qnw8r,Uid:b7d9809d-958e-4df8-8b72-435c46d8030d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356753019126940,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:59:12.700590185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rdbhr,Uid:84a1588a-ae3a-4cf6-b483-eeee3e386743,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1733356613461349496,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84a1588a-ae3a-4cf6-b483-eeee3e386743,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:56:53.132089729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgfvm,Uid:aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356613451482163,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-12-04T23:56:53.136743852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356613446858198,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-04T23:56:53.139132829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-vjhjm,Uid:e00e9325-7046-41b0-ade6-8cac8362e399,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356598018310539,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-12-04T23:56:37.679990231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&PodSandboxMetadata{Name:kindnet-9l7jr,Uid:b4b65009-de28-48a8-afc3-66d8a0aaf8a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356598004149272,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-04T23:56:37.681820018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-168758,Uid:38b79a47425cbd3637dfbec9a8cbddf4,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1733356587522866200,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{kubernetes.io/config.hash: 38b79a47425cbd3637dfbec9a8cbddf4,kubernetes.io/config.seen: 2024-12-04T23:56:27.050071793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-168758,Uid:1f069d7eea6c7110e065e170679d8257,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587517493558,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1f06
9d7eea6c7110e065e170679d8257,kubernetes.io/config.seen: 2024-12-04T23:56:27.050070896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-168758,Uid:3e5c96a4dd7f3f8d2d532f784e777953,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587516376717,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e5c96a4dd7f3f8d2d532f784e777953,kubernetes.io/config.seen: 2024-12-04T23:56:27.050069550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&PodSandboxMetadata{Name:etcd-ha-168758,Uid:ff4359e8
a62956d9be4ff300e54d2dca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587497832763,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.246:2379,kubernetes.io/config.hash: ff4359e8a62956d9be4ff300e54d2dca,kubernetes.io/config.seen: 2024-12-04T23:56:27.050072775Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-168758,Uid:216b15c2b5f2b8f51d538046badfbcd1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733356587492794551,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-168758,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.246:8443,kubernetes.io/config.hash: 216b15c2b5f2b8f51d538046badfbcd1,kubernetes.io/config.seen: 2024-12-04T23:56:27.050065735Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ce99cbab-553e-46f8-8628-3c913e52574c name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.378314640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70fef610-b5ba-4455-8056-6399db3dc921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.378368732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70fef610-b5ba-4455-8056-6399db3dc921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.378650957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70fef610-b5ba-4455-8056-6399db3dc921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.395976355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=735f48ba-5220-4d8b-a8db-99a5419cce94 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.396045186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=735f48ba-5220-4d8b-a8db-99a5419cce94 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.397006839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84d56bf7-cd99-4f6f-b402-b019c2494b07 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.397450903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356979397429654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84d56bf7-cd99-4f6f-b402-b019c2494b07 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.397915987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88bc5682-5d11-4a09-a966-f537f6a52b7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.398016650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88bc5682-5d11-4a09-a966-f537f6a52b7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:02:59 ha-168758 crio[658]: time="2024-12-05 00:02:59.398297836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88bc5682-5d11-4a09-a966-f537f6a52b7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7ff0ab07801       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a10916ad1046       busybox-7dff88458-qnw8r
	acd6b28e94859       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b695edf511c5e       coredns-7c65d6cfc9-zgfvm
	775689612e70e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b16e3868f6d03       coredns-7c65d6cfc9-rdbhr
	ee81cf4f88af4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bb748d0b54fa2       storage-provisioner
	4110b2797a4e4       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   f35f03db32634       kindnet-9l7jr
	54fe1d8daa049       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   29cc806b9c066       kube-proxy-vjhjm
	f9cdaa348b12d       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   399ac5ad91348       kube-vip-ha-168758
	f83a840ab3b70       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9e97775967d5d       kube-scheduler-ha-168758
	1729d6a102a14       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7fa6e474a7667       kube-controller-manager-ha-168758
	9ba9268cc4c55       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c6a8751c9bc08       etcd-ha-168758
	413e537381433       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   05e1ede7541eb       kube-apiserver-ha-168758
	
	
	==> coredns [775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977] <==
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242119s
	[INFO] 10.244.1.2:36162 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009811s
	[INFO] 10.244.1.2:39283 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118789s
	[INFO] 10.244.2.2:54111 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004220931s
	[INFO] 10.244.2.2:32931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237164s
	[INFO] 10.244.2.2:41295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142312s
	[INFO] 10.244.2.2:53943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008473674s
	[INFO] 10.244.2.2:46770 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202477s
	[INFO] 10.244.2.2:55371 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129803s
	[INFO] 10.244.0.4:54472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128336s
	[INFO] 10.244.0.4:56711 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094635s
	[INFO] 10.244.1.2:59375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133764s
	[INFO] 10.244.1.2:35780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111638s
	[INFO] 10.244.1.2:34975 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070467s
	[INFO] 10.244.2.2:45753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154754s
	[INFO] 10.244.2.2:33451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215593s
	[INFO] 10.244.2.2:40356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085623s
	[INFO] 10.244.0.4:46816 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012269s
	[INFO] 10.244.0.4:47330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069513s
	[INFO] 10.244.0.4:57640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114217s
	[INFO] 10.244.1.2:33801 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096362s
	[INFO] 10.244.2.2:37705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000200759s
	[INFO] 10.244.2.2:42211 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190547s
	[INFO] 10.244.0.4:33141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134356s
	[INFO] 10.244.0.4:43968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006677s
	
	
	==> coredns [acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c] <==
	[INFO] 10.244.2.2:49728 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316528s
	[INFO] 10.244.0.4:43307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166964s
	[INFO] 10.244.0.4:56572 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062823s
	[INFO] 10.244.0.4:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000465964s
	[INFO] 10.244.0.4:41018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122656s
	[INFO] 10.244.0.4:58493 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154857s
	[INFO] 10.244.0.4:52626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075295s
	[INFO] 10.244.1.2:49458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001666843s
	[INFO] 10.244.1.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102856s
	[INFO] 10.244.1.2:33580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001685081s
	[INFO] 10.244.1.2:42118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171665s
	[INFO] 10.244.1.2:59523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108873s
	[INFO] 10.244.2.2:49031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292612s
	[INFO] 10.244.0.4:56781 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089871s
	[INFO] 10.244.1.2:52724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218204s
	[INFO] 10.244.1.2:48399 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104187s
	[INFO] 10.244.1.2:60140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087293s
	[INFO] 10.244.2.2:47493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233961s
	[INFO] 10.244.2.2:38060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000321274s
	[INFO] 10.244.0.4:49076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086072s
	[INFO] 10.244.0.4:47481 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000284893s
	[INFO] 10.244.1.2:55588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252847s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257823s
	[INFO] 10.244.1.2:54163 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122038s
	[INFO] 10.244.1.2:39518 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016158s
	
	
	==> describe nodes <==
	Name:               ha-168758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-168758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cbaecf7e4184dafa60b26978754f7be
	  System UUID:                4cbaecf7-e418-4daf-a60b-26978754f7be
	  Boot ID:                    4bc94051-ae8a-40e7-9636-681faf9bf1fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qnw8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-rdbhr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-zgfvm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-168758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-9l7jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-168758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-168758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-vjhjm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-168758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-168758                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-168758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-168758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-168758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  NodeReady                6m6s   kubelet          Node ha-168758 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  RegisteredNode           4m7s   node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	
	
	Name:               ha-168758-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:57:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:00:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-168758-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f10b54702f3c44dfba5cac04964ccd96
	  System UUID:                f10b5470-2f3c-44df-ba5c-ac04964ccd96
	  Boot ID:                    33cbfb56-c30f-45e3-b72b-7b95c35b9817
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vzls9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-168758-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m30s
	  kube-system                 kindnet-f4s96                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m31s
	  kube-system                 kube-apiserver-ha-168758-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-168758-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-s57v4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-ha-168758-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-168758-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m31s (x2 over 5m31s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x2 over 5m31s)  kubelet          Node ha-168758-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x2 over 5m31s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeReady                5m9s                   kubelet          Node ha-168758-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-168758-m02 status is now: NodeNotReady
	
	
	Name:               ha-168758-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-168758-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45e57e3f54964fb3a3aeed03051209b3
	  System UUID:                45e57e3f-5496-4fb3-a3ae-ed03051209b3
	  Boot ID:                    8e17ecfe-4d6f-4380-b2ec-d58e3528bee5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-brzwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-168758-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m14s
	  kube-system                 kindnet-qzjm2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-ha-168758-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ha-168758-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-9hxpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-ha-168758-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-168758-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-168758-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	
	
	Name:               ha-168758-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_59_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:59:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:00:17 +0000   Thu, 05 Dec 2024 00:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-168758-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 52df6477abaa40eb8464d395e2a11a95
	  System UUID:                52df6477-abaa-40eb-8464-d395e2a11a95
	  Boot ID:                    af84ed2d-18aa-4219-8f2e-d7ea7a1e76fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-964c5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-6r2gb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m13s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m13s)  kubelet          Node ha-168758-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m13s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-168758-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 23:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.965232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.552645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.572491] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.058309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061191] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.184233] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.102774] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.251131] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.779101] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.425619] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.056248] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.972412] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.078582] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128288] kauditd_printk_skb: 33 callbacks suppressed
	[ +14.702984] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 4 23:57] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8] <==
	{"level":"warn","ts":"2024-12-05T00:02:59.603790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.648290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.656900Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.658332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.668030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.673034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.689245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.698545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.707579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.711901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.715427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.720709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.727523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.734066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.738080Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.740901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.747360Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.747489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.753263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.759248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.763337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.766083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.769790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.775889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:02:59.782244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:02:59 up 7 min,  0 users,  load average: 0.23, 0.35, 0.20
	Linux ha-168758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302] <==
	I1205 00:02:22.983493       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:32.975709       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:32.975834       1 main.go:301] handling current node
	I1205 00:02:32.975867       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:32.975885       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:32.976196       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:32.976232       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:32.976329       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:32.976349       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:42.976549       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:42.977185       1 main.go:301] handling current node
	I1205 00:02:42.977331       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:42.977368       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:42.977798       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:42.977846       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:42.978121       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:42.978436       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:52.983049       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:52.983086       1 main.go:301] handling current node
	I1205 00:02:52.983099       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:52.983104       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:52.983347       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:52.983354       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:52.983479       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:52.983484       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619] <==
	W1204 23:56:31.867915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I1204 23:56:31.869149       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:56:31.873256       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:56:32.153052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:56:33.976231       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:56:33.989474       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 23:56:34.001309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:56:37.651703       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:56:37.764778       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1204 23:59:17.616400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33356: use of closed network connection
	E1204 23:59:17.796618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33360: use of closed network connection
	E1204 23:59:17.975175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33372: use of closed network connection
	E1204 23:59:18.159453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33388: use of closed network connection
	E1204 23:59:18.340465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33408: use of closed network connection
	E1204 23:59:18.525899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33424: use of closed network connection
	E1204 23:59:18.699204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33448: use of closed network connection
	E1204 23:59:18.871260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33456: use of closed network connection
	E1204 23:59:19.044471       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33468: use of closed network connection
	E1204 23:59:19.324768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33494: use of closed network connection
	E1204 23:59:19.503485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33512: use of closed network connection
	E1204 23:59:19.681162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33532: use of closed network connection
	E1204 23:59:19.864655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33548: use of closed network connection
	E1204 23:59:20.038197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33568: use of closed network connection
	E1204 23:59:20.209773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33584: use of closed network connection
	W1205 00:00:41.898434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.211 192.168.39.246]
	
	
	==> kube-controller-manager [1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3] <==
	E1204 23:59:46.495910       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2xb9f failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2xb9f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1204 23:59:46.913622       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-168758-m04\" does not exist"
	I1204 23:59:46.947226       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-168758-m04" podCIDRs=["10.244.3.0/24"]
	I1204 23:59:46.947330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.947390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.951979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.179648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195618       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-168758-m04"
	I1204 23:59:47.217355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.297136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.621862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:57.226529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.702516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.703393       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:00:06.718143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:07.112438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:17.922402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:01:07.143195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:01:07.144088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.170589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.248294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.276459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.988319ms"
	I1205 00:01:07.277110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.518µs"
	I1205 00:01:12.432512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	
	
	==> kube-proxy [54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:56:38.767631       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:56:38.783503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E1204 23:56:38.783650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:56:38.826822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:56:38.826910       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:56:38.827013       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:56:38.829514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:56:38.830207       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:56:38.830274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:56:38.832929       1 config.go:199] "Starting service config controller"
	I1204 23:56:38.833423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:56:38.833660       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:56:38.833695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:56:38.836133       1 config.go:328] "Starting node config controller"
	I1204 23:56:38.836155       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:56:38.934569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:56:38.934627       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:56:38.936209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579] <==
	W1204 23:56:30.307297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:30.307308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.211435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:56:31.211491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.247731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:56:31.247775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.414104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.414215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.426986       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:31.427111       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:56:31.430415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.430490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.446966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:31.447114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.462015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:31.462105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.482639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:31.482672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1204 23:56:33.294482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:58:43.853823       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzjm2" node="ha-168758-m03"
	E1204 23:58:43.856189       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" pod="kube-system/kindnet-qzjm2"
	E1204 23:59:46.998972       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6r2gb" node="ha-168758-m04"
	E1204 23:59:46.999075       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" pod="kube-system/kube-proxy-6r2gb"
	
	
	==> kubelet <==
	Dec 05 00:01:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:01:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017719    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017769    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.019770    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.020092    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022125    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022148    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026278    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026320    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029261    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029299    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032005    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032327    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:33 ha-168758 kubelet[1298]: E1205 00:02:33.944974    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034558    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037075    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037120    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:54 ha-168758 kubelet[1298]: E1205 00:02:54.039608    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356974039135405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:54 ha-168758 kubelet[1298]: E1205 00:02:54.039657    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356974039135405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.054323782s)
ha_test.go:309: expected profile "ha-168758" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-168758\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-168758\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-168758\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.246\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.70\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.211\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.118\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\
"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (1.348053334s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m03_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-168758 node start m02 -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:55:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:55:54.691284   36539 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:54.691414   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691424   36539 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:54.691432   36539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:54.691628   36539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:54.692240   36539 out.go:352] Setting JSON to false
	I1204 23:55:54.693153   36539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5901,"bootTime":1733350654,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:54.693247   36539 start.go:139] virtualization: kvm guest
	I1204 23:55:54.695552   36539 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:54.696955   36539 notify.go:220] Checking for updates...
	I1204 23:55:54.696986   36539 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:54.698203   36539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:54.699583   36539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:54.700898   36539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.702193   36539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:54.703640   36539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:54.705053   36539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:54.739286   36539 out.go:177] * Using the kvm2 driver based on user configuration
	I1204 23:55:54.740386   36539 start.go:297] selected driver: kvm2
	I1204 23:55:54.740398   36539 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:55:54.740409   36539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:54.741134   36539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.741212   36539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:55:54.755884   36539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:55:54.755947   36539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:55:54.756215   36539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:55:54.756246   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:55:54.756288   36539 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 23:55:54.756296   36539 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:55:54.756346   36539 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:54.756434   36539 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:55:54.758105   36539 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1204 23:55:54.759313   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:55:54.759357   36539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:55:54.759363   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:55:54.759483   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:55:54.759495   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:55:54.759812   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:55:54.759833   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json: {Name:mk6e7aaeeb71d483ab659741dd4a36c7ed0cf735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:55:54.759956   36539 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:55:54.759986   36539 start.go:364] duration metric: took 15.99µs to acquireMachinesLock for "ha-168758"
	I1204 23:55:54.759999   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:55:54.760065   36539 start.go:125] createHost starting for "" (driver="kvm2")
	I1204 23:55:54.761682   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:55:54.761785   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:54.761819   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:54.775974   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I1204 23:55:54.776416   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:54.776960   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:55:54.776978   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:54.777327   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:54.777500   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:55:54.777669   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:55:54.777814   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:55:54.777843   36539 client.go:168] LocalClient.Create starting
	I1204 23:55:54.777875   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:55:54.777915   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.777928   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778006   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:55:54.778033   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:55:54.778049   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:55:54.778066   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:55:54.778079   36539 main.go:141] libmachine: (ha-168758) Calling .PreCreateCheck
	I1204 23:55:54.778456   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:55:54.778813   36539 main.go:141] libmachine: Creating machine...
	I1204 23:55:54.778825   36539 main.go:141] libmachine: (ha-168758) Calling .Create
	I1204 23:55:54.778947   36539 main.go:141] libmachine: (ha-168758) Creating KVM machine...
	I1204 23:55:54.780063   36539 main.go:141] libmachine: (ha-168758) DBG | found existing default KVM network
	I1204 23:55:54.780869   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.780727   36562 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1204 23:55:54.780949   36539 main.go:141] libmachine: (ha-168758) DBG | created network xml: 
	I1204 23:55:54.780981   36539 main.go:141] libmachine: (ha-168758) DBG | <network>
	I1204 23:55:54.780994   36539 main.go:141] libmachine: (ha-168758) DBG |   <name>mk-ha-168758</name>
	I1204 23:55:54.781002   36539 main.go:141] libmachine: (ha-168758) DBG |   <dns enable='no'/>
	I1204 23:55:54.781007   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781012   36539 main.go:141] libmachine: (ha-168758) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1204 23:55:54.781017   36539 main.go:141] libmachine: (ha-168758) DBG |     <dhcp>
	I1204 23:55:54.781022   36539 main.go:141] libmachine: (ha-168758) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1204 23:55:54.781030   36539 main.go:141] libmachine: (ha-168758) DBG |     </dhcp>
	I1204 23:55:54.781067   36539 main.go:141] libmachine: (ha-168758) DBG |   </ip>
	I1204 23:55:54.781085   36539 main.go:141] libmachine: (ha-168758) DBG |   
	I1204 23:55:54.781094   36539 main.go:141] libmachine: (ha-168758) DBG | </network>
	I1204 23:55:54.781107   36539 main.go:141] libmachine: (ha-168758) DBG | 
	I1204 23:55:54.786047   36539 main.go:141] libmachine: (ha-168758) DBG | trying to create private KVM network mk-ha-168758 192.168.39.0/24...
	I1204 23:55:54.850461   36539 main.go:141] libmachine: (ha-168758) DBG | private KVM network mk-ha-168758 192.168.39.0/24 created
	I1204 23:55:54.850492   36539 main.go:141] libmachine: (ha-168758) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:54.850520   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:54.850437   36562 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:54.850537   36539 main.go:141] libmachine: (ha-168758) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:55:54.850557   36539 main.go:141] libmachine: (ha-168758) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:55:55.110330   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.110147   36562 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa...
	I1204 23:55:55.305236   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305108   36562 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk...
	I1204 23:55:55.305287   36539 main.go:141] libmachine: (ha-168758) DBG | Writing magic tar header
	I1204 23:55:55.305304   36539 main.go:141] libmachine: (ha-168758) DBG | Writing SSH key tar header
	I1204 23:55:55.305328   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:55.305248   36562 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 ...
	I1204 23:55:55.305350   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758
	I1204 23:55:55.305432   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758 (perms=drwx------)
	I1204 23:55:55.305462   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:55:55.305478   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:55:55.305493   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:55:55.305502   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:55:55.305511   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:55:55.305522   36539 main.go:141] libmachine: (ha-168758) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:55:55.305536   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:55.305552   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:55.305567   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:55:55.305572   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:55:55.305581   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:55:55.305588   36539 main.go:141] libmachine: (ha-168758) DBG | Checking permissions on dir: /home
	I1204 23:55:55.305594   36539 main.go:141] libmachine: (ha-168758) DBG | Skipping /home - not owner
	I1204 23:55:55.306542   36539 main.go:141] libmachine: (ha-168758) define libvirt domain using xml: 
	I1204 23:55:55.306560   36539 main.go:141] libmachine: (ha-168758) <domain type='kvm'>
	I1204 23:55:55.306569   36539 main.go:141] libmachine: (ha-168758)   <name>ha-168758</name>
	I1204 23:55:55.306577   36539 main.go:141] libmachine: (ha-168758)   <memory unit='MiB'>2200</memory>
	I1204 23:55:55.306585   36539 main.go:141] libmachine: (ha-168758)   <vcpu>2</vcpu>
	I1204 23:55:55.306593   36539 main.go:141] libmachine: (ha-168758)   <features>
	I1204 23:55:55.306598   36539 main.go:141] libmachine: (ha-168758)     <acpi/>
	I1204 23:55:55.306605   36539 main.go:141] libmachine: (ha-168758)     <apic/>
	I1204 23:55:55.306610   36539 main.go:141] libmachine: (ha-168758)     <pae/>
	I1204 23:55:55.306624   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.306651   36539 main.go:141] libmachine: (ha-168758)   </features>
	I1204 23:55:55.306675   36539 main.go:141] libmachine: (ha-168758)   <cpu mode='host-passthrough'>
	I1204 23:55:55.306682   36539 main.go:141] libmachine: (ha-168758)   
	I1204 23:55:55.306690   36539 main.go:141] libmachine: (ha-168758)   </cpu>
	I1204 23:55:55.306700   36539 main.go:141] libmachine: (ha-168758)   <os>
	I1204 23:55:55.306707   36539 main.go:141] libmachine: (ha-168758)     <type>hvm</type>
	I1204 23:55:55.306714   36539 main.go:141] libmachine: (ha-168758)     <boot dev='cdrom'/>
	I1204 23:55:55.306724   36539 main.go:141] libmachine: (ha-168758)     <boot dev='hd'/>
	I1204 23:55:55.306733   36539 main.go:141] libmachine: (ha-168758)     <bootmenu enable='no'/>
	I1204 23:55:55.306742   36539 main.go:141] libmachine: (ha-168758)   </os>
	I1204 23:55:55.306751   36539 main.go:141] libmachine: (ha-168758)   <devices>
	I1204 23:55:55.306763   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='cdrom'>
	I1204 23:55:55.306773   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/boot2docker.iso'/>
	I1204 23:55:55.306798   36539 main.go:141] libmachine: (ha-168758)       <target dev='hdc' bus='scsi'/>
	I1204 23:55:55.306817   36539 main.go:141] libmachine: (ha-168758)       <readonly/>
	I1204 23:55:55.306827   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306836   36539 main.go:141] libmachine: (ha-168758)     <disk type='file' device='disk'>
	I1204 23:55:55.306848   36539 main.go:141] libmachine: (ha-168758)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:55:55.306863   36539 main.go:141] libmachine: (ha-168758)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/ha-168758.rawdisk'/>
	I1204 23:55:55.306877   36539 main.go:141] libmachine: (ha-168758)       <target dev='hda' bus='virtio'/>
	I1204 23:55:55.306888   36539 main.go:141] libmachine: (ha-168758)     </disk>
	I1204 23:55:55.306895   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306907   36539 main.go:141] libmachine: (ha-168758)       <source network='mk-ha-168758'/>
	I1204 23:55:55.306917   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306926   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306935   36539 main.go:141] libmachine: (ha-168758)     <interface type='network'>
	I1204 23:55:55.306944   36539 main.go:141] libmachine: (ha-168758)       <source network='default'/>
	I1204 23:55:55.306958   36539 main.go:141] libmachine: (ha-168758)       <model type='virtio'/>
	I1204 23:55:55.306969   36539 main.go:141] libmachine: (ha-168758)     </interface>
	I1204 23:55:55.306980   36539 main.go:141] libmachine: (ha-168758)     <serial type='pty'>
	I1204 23:55:55.306986   36539 main.go:141] libmachine: (ha-168758)       <target port='0'/>
	I1204 23:55:55.306993   36539 main.go:141] libmachine: (ha-168758)     </serial>
	I1204 23:55:55.307001   36539 main.go:141] libmachine: (ha-168758)     <console type='pty'>
	I1204 23:55:55.307011   36539 main.go:141] libmachine: (ha-168758)       <target type='serial' port='0'/>
	I1204 23:55:55.307022   36539 main.go:141] libmachine: (ha-168758)     </console>
	I1204 23:55:55.307036   36539 main.go:141] libmachine: (ha-168758)     <rng model='virtio'>
	I1204 23:55:55.307049   36539 main.go:141] libmachine: (ha-168758)       <backend model='random'>/dev/random</backend>
	I1204 23:55:55.307058   36539 main.go:141] libmachine: (ha-168758)     </rng>
	I1204 23:55:55.307065   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307074   36539 main.go:141] libmachine: (ha-168758)     
	I1204 23:55:55.307081   36539 main.go:141] libmachine: (ha-168758)   </devices>
	I1204 23:55:55.307091   36539 main.go:141] libmachine: (ha-168758) </domain>
	I1204 23:55:55.307100   36539 main.go:141] libmachine: (ha-168758) 
	I1204 23:55:55.311286   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:cd:a8:a7 in network default
	I1204 23:55:55.311801   36539 main.go:141] libmachine: (ha-168758) Ensuring networks are active...
	I1204 23:55:55.311821   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:55.312494   36539 main.go:141] libmachine: (ha-168758) Ensuring network default is active
	I1204 23:55:55.312741   36539 main.go:141] libmachine: (ha-168758) Ensuring network mk-ha-168758 is active
	I1204 23:55:55.313181   36539 main.go:141] libmachine: (ha-168758) Getting domain xml...
	I1204 23:55:55.313756   36539 main.go:141] libmachine: (ha-168758) Creating domain...
	I1204 23:55:56.499396   36539 main.go:141] libmachine: (ha-168758) Waiting to get IP...
	I1204 23:55:56.500076   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.500499   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.500544   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.500496   36562 retry.go:31] will retry after 269.735879ms: waiting for machine to come up
	I1204 23:55:56.772123   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:56.772694   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:56.772717   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:56.772654   36562 retry.go:31] will retry after 276.504012ms: waiting for machine to come up
	I1204 23:55:57.051105   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.051610   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.051639   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.051569   36562 retry.go:31] will retry after 401.653592ms: waiting for machine to come up
	I1204 23:55:57.455271   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:57.455618   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:57.455645   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:57.455573   36562 retry.go:31] will retry after 574.501264ms: waiting for machine to come up
	I1204 23:55:58.031181   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.031549   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.031571   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.031508   36562 retry.go:31] will retry after 481.845874ms: waiting for machine to come up
	I1204 23:55:58.515212   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:58.515533   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:58.515555   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:58.515482   36562 retry.go:31] will retry after 665.524951ms: waiting for machine to come up
	I1204 23:55:59.182393   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:55:59.182797   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:55:59.182828   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:55:59.182761   36562 retry.go:31] will retry after 904.131177ms: waiting for machine to come up
	I1204 23:56:00.088207   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:00.088652   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:00.088679   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:00.088622   36562 retry.go:31] will retry after 1.17453888s: waiting for machine to come up
	I1204 23:56:01.265172   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:01.265600   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:01.265628   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:01.265549   36562 retry.go:31] will retry after 1.695149003s: waiting for machine to come up
	I1204 23:56:02.963009   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:02.963469   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:02.963496   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:02.963425   36562 retry.go:31] will retry after 1.502018093s: waiting for machine to come up
	I1204 23:56:04.467087   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:04.467517   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:04.467540   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:04.467461   36562 retry.go:31] will retry after 2.001332834s: waiting for machine to come up
	I1204 23:56:06.470836   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:06.471294   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:06.471318   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:06.471260   36562 retry.go:31] will retry after 3.491074143s: waiting for machine to come up
	I1204 23:56:09.963576   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:09.964086   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:09.964117   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:09.964017   36562 retry.go:31] will retry after 3.362007605s: waiting for machine to come up
	I1204 23:56:13.329557   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:13.329894   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find current IP address of domain ha-168758 in network mk-ha-168758
	I1204 23:56:13.329919   36539 main.go:141] libmachine: (ha-168758) DBG | I1204 23:56:13.329871   36562 retry.go:31] will retry after 3.508639528s: waiting for machine to come up
	I1204 23:56:16.841192   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.841632   36539 main.go:141] libmachine: (ha-168758) Found IP for machine: 192.168.39.246
	I1204 23:56:16.841646   36539 main.go:141] libmachine: (ha-168758) Reserving static IP address...
	I1204 23:56:16.841659   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has current primary IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.842065   36539 main.go:141] libmachine: (ha-168758) DBG | unable to find host DHCP lease matching {name: "ha-168758", mac: "52:54:00:e5:a0:85", ip: "192.168.39.246"} in network mk-ha-168758
	I1204 23:56:16.911192   36539 main.go:141] libmachine: (ha-168758) DBG | Getting to WaitForSSH function...
	I1204 23:56:16.911218   36539 main.go:141] libmachine: (ha-168758) Reserved static IP address: 192.168.39.246
	I1204 23:56:16.911227   36539 main.go:141] libmachine: (ha-168758) Waiting for SSH to be available...
	I1204 23:56:16.913359   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913692   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:16.913721   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:16.913806   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH client type: external
	I1204 23:56:16.913831   36539 main.go:141] libmachine: (ha-168758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa (-rw-------)
	I1204 23:56:16.913874   36539 main.go:141] libmachine: (ha-168758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:56:16.913895   36539 main.go:141] libmachine: (ha-168758) DBG | About to run SSH command:
	I1204 23:56:16.913909   36539 main.go:141] libmachine: (ha-168758) DBG | exit 0
	I1204 23:56:17.042024   36539 main.go:141] libmachine: (ha-168758) DBG | SSH cmd err, output: <nil>: 
	I1204 23:56:17.042323   36539 main.go:141] libmachine: (ha-168758) KVM machine creation complete!
	I1204 23:56:17.042711   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:17.043239   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043439   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:17.043673   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:56:17.043690   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:17.045108   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:56:17.045123   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:56:17.045129   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:56:17.045137   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.047586   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.047979   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.048020   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.048171   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.048340   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048499   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.048610   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.048758   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.048940   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.048952   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:56:17.161079   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.161103   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:56:17.161111   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.163996   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164353   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.164381   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.164502   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.164678   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164819   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.164953   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.165084   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.165251   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.165266   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:56:17.278333   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:56:17.278413   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:56:17.278424   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:56:17.278431   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278638   36539 buildroot.go:166] provisioning hostname "ha-168758"
	I1204 23:56:17.278665   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.278846   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.281326   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281670   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.281694   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.281868   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.282066   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282192   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.282295   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.282427   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.282619   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.282636   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1204 23:56:17.407414   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1204 23:56:17.407442   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.410140   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410515   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.410537   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.410696   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.410903   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411049   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.411179   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.411328   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.411496   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.411510   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:56:17.529969   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:56:17.530000   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:56:17.530033   36539 buildroot.go:174] setting up certificates
	I1204 23:56:17.530051   36539 provision.go:84] configureAuth start
	I1204 23:56:17.530060   36539 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1204 23:56:17.530320   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:17.533080   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533538   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.533566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.533710   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.535938   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536275   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.536304   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.536461   36539 provision.go:143] copyHostCerts
	I1204 23:56:17.536485   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536522   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:56:17.536530   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:56:17.536603   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:56:17.536688   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536713   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:56:17.536717   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:56:17.536740   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:56:17.536823   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536843   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:56:17.536847   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:56:17.536869   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:56:17.536930   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
	I1204 23:56:17.711262   36539 provision.go:177] copyRemoteCerts
	I1204 23:56:17.711322   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:56:17.711342   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.714054   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714497   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.714526   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.714715   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.714918   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.715121   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.715311   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:17.799635   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:56:17.799713   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:56:17.821716   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:56:17.821774   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 23:56:17.843183   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:56:17.843274   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:56:17.864534   36539 provision.go:87] duration metric: took 334.473115ms to configureAuth
	I1204 23:56:17.864557   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:56:17.864710   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:17.864770   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:17.867187   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867580   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:17.867600   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:17.867756   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:17.867942   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868106   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:17.868250   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:17.868399   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:17.868552   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:17.868568   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:56:18.094883   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:56:18.094906   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:56:18.094924   36539 main.go:141] libmachine: (ha-168758) Calling .GetURL
	I1204 23:56:18.096038   36539 main.go:141] libmachine: (ha-168758) DBG | Using libvirt version 6000000
	I1204 23:56:18.097970   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098244   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.098281   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.098435   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:56:18.098456   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:56:18.098466   36539 client.go:171] duration metric: took 23.320611985s to LocalClient.Create
	I1204 23:56:18.098496   36539 start.go:167] duration metric: took 23.320681504s to libmachine.API.Create "ha-168758"
	I1204 23:56:18.098512   36539 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1204 23:56:18.098543   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:56:18.098564   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.098810   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:56:18.098838   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.100934   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101269   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.101300   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.101384   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.101557   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.101728   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.101868   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.187794   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:56:18.191682   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:56:18.191702   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:56:18.191762   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:56:18.191831   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:56:18.191844   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:56:18.191928   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:56:18.200599   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:18.221903   36539 start.go:296] duration metric: took 123.376466ms for postStartSetup
	I1204 23:56:18.221953   36539 main.go:141] libmachine: (ha-168758) Calling .GetConfigRaw
	I1204 23:56:18.222479   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.225322   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225603   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.225629   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.225822   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:18.226039   36539 start.go:128] duration metric: took 23.465963847s to createHost
	I1204 23:56:18.226063   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.227948   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228242   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.228279   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.228403   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.228611   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228757   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.228867   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.229008   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:56:18.229167   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1204 23:56:18.229181   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:56:18.342221   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356578.318371600
	
	I1204 23:56:18.342254   36539 fix.go:216] guest clock: 1733356578.318371600
	I1204 23:56:18.342262   36539 fix.go:229] Guest: 2024-12-04 23:56:18.3183716 +0000 UTC Remote: 2024-12-04 23:56:18.226051425 +0000 UTC m=+23.570737200 (delta=92.320175ms)
	I1204 23:56:18.342279   36539 fix.go:200] guest clock delta is within tolerance: 92.320175ms
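As a worked example of the fix.go clock check logged above, here is a small Go sketch (not minikube's code; the tolerance constant is an assumption) that reproduces the roughly 92ms delta from the guest's date +%s.%N output and the host timestamp recorded in this run:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest timestamp is the `date +%s.%N` output captured in the log above;
	// the host timestamp is the "Remote" value from the same log line.
	const guestOut = "1733356578.318371600"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Date(2024, 12, 4, 23, 56, 18, 226051425, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance is illustrative only, not minikube's actual threshold.
	const tolerance = time.Second
	fmt.Printf("guest clock delta %v (within %v: %v)\n", delta, tolerance, delta < tolerance)
}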
	I1204 23:56:18.342284   36539 start.go:83] releasing machines lock for "ha-168758", held for 23.582292191s
	I1204 23:56:18.342302   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.342572   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:18.345220   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345537   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.345566   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.345727   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346174   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346341   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:18.346431   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:56:18.346466   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.346494   36539 ssh_runner.go:195] Run: cat /version.json
	I1204 23:56:18.346512   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:18.348977   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349188   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349347   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349382   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349542   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:18.349563   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:18.349567   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349682   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349723   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:18.349810   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.349858   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:18.349927   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.349985   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:18.350095   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:18.458516   36539 ssh_runner.go:195] Run: systemctl --version
	I1204 23:56:18.464121   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:56:18.616560   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:56:18.621973   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:56:18.622057   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:56:18.637008   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:56:18.637037   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:56:18.637115   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:56:18.652474   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:56:18.665368   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:56:18.665440   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:56:18.678285   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:56:18.691433   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:56:18.809520   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:56:18.976690   36539 docker.go:233] disabling docker service ...
	I1204 23:56:18.976760   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:56:18.990927   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:56:19.002936   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:56:19.109515   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:56:19.215051   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:56:19.228643   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:56:19.245611   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:56:19.245668   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.255147   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:56:19.255206   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.264826   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.274125   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.283553   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:56:19.293134   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.302638   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.318465   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:56:19.327811   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:56:19.336218   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:56:19.336258   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:56:19.347830   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:56:19.356252   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:19.465000   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:56:19.556969   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:56:19.557036   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:56:19.561493   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:56:19.561547   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:56:19.564952   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:56:19.603625   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:56:19.603715   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.633677   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:56:19.660728   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:56:19.661971   36539 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1204 23:56:19.664503   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.664856   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:19.664875   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:19.665065   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:56:19.668903   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:19.681192   36539 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:56:19.681288   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:19.681341   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:19.712021   36539 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1204 23:56:19.712076   36539 ssh_runner.go:195] Run: which lz4
	I1204 23:56:19.715771   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1204 23:56:19.715868   36539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 23:56:19.719604   36539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 23:56:19.719630   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1204 23:56:20.924241   36539 crio.go:462] duration metric: took 1.208402192s to copy over tarball
	I1204 23:56:20.924314   36539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 23:56:22.898491   36539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.974144098s)
	I1204 23:56:22.898524   36539 crio.go:469] duration metric: took 1.974249374s to extract the tarball
	I1204 23:56:22.898533   36539 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 23:56:22.934474   36539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:56:22.976510   36539 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:56:22.976531   36539 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:56:22.976539   36539 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1204 23:56:22.976632   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:56:22.976690   36539 ssh_runner.go:195] Run: crio config
	I1204 23:56:23.020705   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:23.020725   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:23.020733   36539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:56:23.020757   36539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:56:23.020887   36539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
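The kubeadm.yaml rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough illustration, and assuming the gopkg.in/yaml.v3 module plus the /var/tmp/minikube/kubeadm.yaml path that appears later in this log, a Go sketch that splits the stream and prints each document's apiVersion and kind:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log; run inside the guest, or point it at a local copy.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only the identifying fields are decoded from each YAML document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Against the config dumped above this would print the four kinds in order, a quick sanity check that the generated file contains all expected objects.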
	
	I1204 23:56:23.020918   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:56:23.020968   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:56:23.036384   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:56:23.036487   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1204 23:56:23.036538   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:56:23.045918   36539 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:56:23.045983   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 23:56:23.054649   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1204 23:56:23.069725   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:56:23.085152   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1204 23:56:23.100420   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1204 23:56:23.115477   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:56:23.118939   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:56:23.130202   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:56:23.249727   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:56:23.265905   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1204 23:56:23.265923   36539 certs.go:194] generating shared ca certs ...
	I1204 23:56:23.265936   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.266102   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:56:23.266137   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:56:23.266147   36539 certs.go:256] generating profile certs ...
	I1204 23:56:23.266191   36539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:56:23.266210   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt with IP's: []
	I1204 23:56:23.454121   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt ...
	I1204 23:56:23.454147   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt: {Name:mk99d8a201f862390e96b084b532473bf8d8aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454316   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key ...
	I1204 23:56:23.454328   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key: {Name:mkfdc708535eba27b71f019b2247ebd8ebc65dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.454403   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a
	I1204 23:56:23.454418   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I1204 23:56:23.571650   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a ...
	I1204 23:56:23.571677   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a: {Name:mk2304c3cee7d491c51de92409bc18bd5dd4ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571829   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a ...
	I1204 23:56:23.571841   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a: {Name:mkd75785818ce3fa6f4d0786b8ae41ba1de41627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.571908   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:56:23.572007   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.99c4c47a -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:56:23.572063   36539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:56:23.572077   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt with IP's: []
	I1204 23:56:23.673268   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt ...
	I1204 23:56:23.673293   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt: {Name:mkd46a8ffbcb119ca858529996fc89416032f07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:23.673434   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key ...
	I1204 23:56:23.673444   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key: {Name:mk7fc42a81c367eaf7af5377c2050677c25e0a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
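The crypto.go lines above show the profile serving certificate being generated for a fixed set of IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.39.246 and the HA VIP 192.168.39.254). A self-contained Go sketch of the same crypto/x509 technique, using a throwaway CA and illustrative names rather than minikube's actual implementation:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Serving cert signed by the CA, with the IP SANs listed in the log above.
	srvKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		panic(err)
	}
}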
	I1204 23:56:23.673507   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:56:23.673523   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:56:23.673534   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:56:23.673547   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:56:23.673561   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:56:23.673574   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:56:23.673586   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:56:23.673598   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:56:23.673642   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:56:23.673674   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:56:23.673683   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:56:23.673707   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:56:23.673729   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:56:23.673751   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:56:23.673787   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:56:23.673812   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.673825   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.673837   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:23.674359   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:56:23.698874   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:56:23.720936   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:56:23.742507   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:56:23.763840   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 23:56:23.787615   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:56:23.809303   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:56:23.830746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:56:23.852712   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:56:23.874053   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:56:23.895302   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:56:23.916675   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:56:23.931879   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:56:23.937416   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:56:23.947727   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951899   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.951951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:56:23.957401   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:56:23.967356   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:56:23.977327   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981332   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.981387   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:56:23.986550   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:56:23.996547   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:56:24.006567   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010520   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.010574   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:56:24.015659   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:56:24.028404   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:56:24.032403   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:56:24.032453   36539 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:56:24.032536   36539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:56:24.032593   36539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:56:24.081213   36539 cri.go:89] found id: ""
	I1204 23:56:24.081287   36539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:56:24.095063   36539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:56:24.104106   36539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:56:24.113190   36539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:56:24.113206   36539 kubeadm.go:157] found existing configuration files:
	
	I1204 23:56:24.113242   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:56:24.121657   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:56:24.121703   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:56:24.130638   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:56:24.139019   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:56:24.139075   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:56:24.147703   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.156160   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:56:24.156205   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:56:24.165072   36539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:56:24.173473   36539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:56:24.173525   36539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:56:24.182301   36539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 23:56:24.376697   36539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:56:34.579933   36539 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:56:34.580018   36539 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:56:34.580122   36539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:56:34.580249   36539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:56:34.580403   36539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:56:34.580494   36539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:56:34.582178   36539 out.go:235]   - Generating certificates and keys ...
	I1204 23:56:34.582281   36539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:56:34.582355   36539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:56:34.582417   36539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:56:34.582467   36539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:56:34.582518   36539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:56:34.582564   36539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:56:34.582636   36539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:56:34.582772   36539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582829   36539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:56:34.582931   36539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-168758 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1204 23:56:34.582990   36539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:56:34.583041   36539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:56:34.583090   36539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:56:34.583139   36539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:56:34.583188   36539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:56:34.583239   36539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:56:34.583290   36539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:56:34.583344   36539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:56:34.583394   36539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:56:34.583463   36539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:56:34.583529   36539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:56:34.585082   36539 out.go:235]   - Booting up control plane ...
	I1204 23:56:34.585162   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:56:34.585225   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:56:34.585285   36539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:56:34.585417   36539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:56:34.585527   36539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:56:34.585583   36539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:56:34.585716   36539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:56:34.585831   36539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:56:34.585888   36539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.206497ms
	I1204 23:56:34.585961   36539 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:56:34.586024   36539 kubeadm.go:310] [api-check] The API server is healthy after 5.989146948s
	I1204 23:56:34.586115   36539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:56:34.586292   36539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:56:34.586380   36539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:56:34.586600   36539 kubeadm.go:310] [mark-control-plane] Marking the node ha-168758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:56:34.586682   36539 kubeadm.go:310] [bootstrap-token] Using token: xmluo7.j7yx8f6chysik4ve
	I1204 23:56:34.588072   36539 out.go:235]   - Configuring RBAC rules ...
	I1204 23:56:34.588170   36539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:56:34.588259   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:56:34.588422   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:56:34.588567   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:56:34.588682   36539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:56:34.588762   36539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:56:34.588863   36539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:56:34.588922   36539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:56:34.588962   36539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:56:34.588968   36539 kubeadm.go:310] 
	I1204 23:56:34.589020   36539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:56:34.589026   36539 kubeadm.go:310] 
	I1204 23:56:34.589098   36539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:56:34.589104   36539 kubeadm.go:310] 
	I1204 23:56:34.589125   36539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:56:34.589179   36539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:56:34.589225   36539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:56:34.589231   36539 kubeadm.go:310] 
	I1204 23:56:34.589295   36539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:56:34.589308   36539 kubeadm.go:310] 
	I1204 23:56:34.589352   36539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:56:34.589359   36539 kubeadm.go:310] 
	I1204 23:56:34.589401   36539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:56:34.589475   36539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:56:34.589570   36539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:56:34.589580   36539 kubeadm.go:310] 
	I1204 23:56:34.589656   36539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:56:34.589727   36539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:56:34.589735   36539 kubeadm.go:310] 
	I1204 23:56:34.589810   36539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.589952   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c \
	I1204 23:56:34.589986   36539 kubeadm.go:310] 	--control-plane 
	I1204 23:56:34.589993   36539 kubeadm.go:310] 
	I1204 23:56:34.590071   36539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:56:34.590078   36539 kubeadm.go:310] 
	I1204 23:56:34.590151   36539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmluo7.j7yx8f6chysik4ve \
	I1204 23:56:34.590255   36539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c 
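For context, the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that recomputes it from the ca.crt copied to /var/lib/minikube/certs earlier in this log (run inside the guest, or point it at any copy of the CA certificate):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the cert copy step earlier in this log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}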
	I1204 23:56:34.590265   36539 cni.go:84] Creating CNI manager for ""
	I1204 23:56:34.590270   36539 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 23:56:34.591672   36539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:56:34.592995   36539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:56:34.598320   36539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:56:34.598340   36539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:56:34.616335   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 23:56:34.967246   36539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:56:34.967343   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:34.967389   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758 minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=true
	I1204 23:56:35.009341   36539 ops.go:34] apiserver oom_adj: -16
	I1204 23:56:35.191577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:35.691603   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.192181   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:36.691747   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.192577   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.691663   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:56:37.837937   36539 kubeadm.go:1113] duration metric: took 2.870656691s to wait for elevateKubeSystemPrivileges
	I1204 23:56:37.837999   36539 kubeadm.go:394] duration metric: took 13.80554404s to StartCluster
	I1204 23:56:37.838021   36539 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.838113   36539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.838945   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:56:37.839186   36539 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:37.839215   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:56:37.839197   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:56:37.839216   36539 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 23:56:37.839321   36539 addons.go:69] Setting storage-provisioner=true in profile "ha-168758"
	I1204 23:56:37.839348   36539 addons.go:234] Setting addon storage-provisioner=true in "ha-168758"
	I1204 23:56:37.839351   36539 addons.go:69] Setting default-storageclass=true in profile "ha-168758"
	I1204 23:56:37.839375   36539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-168758"
	I1204 23:56:37.839380   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:37.839384   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.839759   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839800   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.839824   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.839863   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.854550   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1204 23:56:37.854839   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1204 23:56:37.854964   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855259   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.855493   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855517   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855736   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.855762   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.855853   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.855980   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.856068   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.856674   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.856721   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.858200   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:56:37.858527   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 23:56:37.859020   36539 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 23:56:37.859271   36539 addons.go:234] Setting addon default-storageclass=true in "ha-168758"
	I1204 23:56:37.859308   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:56:37.859664   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.859703   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.872166   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1204 23:56:37.872824   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.873354   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.873383   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.873694   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.873898   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.873955   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I1204 23:56:37.874272   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.874700   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.874726   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.875097   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.875640   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.875644   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:37.875720   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:37.877494   36539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:56:37.878808   36539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:37.878822   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:56:37.878836   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.882048   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882510   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.882542   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.882617   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.882791   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.882930   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.883065   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.890703   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1204 23:56:37.891111   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:37.891600   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:37.891624   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:37.891905   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:37.892126   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:56:37.893659   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:56:37.893845   36539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:37.893861   36539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:56:37.893876   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:56:37.896603   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897013   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:56:37.897035   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:56:37.897191   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:56:37.897338   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:56:37.897461   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:56:37.897562   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:56:37.981170   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:56:38.100674   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:56:38.124529   36539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:56:38.478625   36539 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1204 23:56:38.478692   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.478710   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.478984   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479004   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479014   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.479022   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.479240   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.479255   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.479326   36539 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 23:56:38.479347   36539 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 23:56:38.479442   36539 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1204 23:56:38.479452   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.479462   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.479469   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.494926   36539 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1204 23:56:38.495499   36539 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1204 23:56:38.495513   36539 round_trippers.go:469] Request Headers:
	I1204 23:56:38.495520   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:56:38.495529   36539 round_trippers.go:473]     Content-Type: application/json
	I1204 23:56:38.495532   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:56:38.504036   36539 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1204 23:56:38.504185   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.504196   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.504490   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.504510   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.504543   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.728533   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728559   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.728849   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.728876   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.728885   36539 main.go:141] libmachine: Making call to close driver server
	I1204 23:56:38.728893   36539 main.go:141] libmachine: (ha-168758) Calling .Close
	I1204 23:56:38.729129   36539 main.go:141] libmachine: (ha-168758) DBG | Closing plugin on server side
	I1204 23:56:38.729202   36539 main.go:141] libmachine: Successfully made call to close driver server
	I1204 23:56:38.729219   36539 main.go:141] libmachine: Making call to close connection to plugin binary
	I1204 23:56:38.730946   36539 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 23:56:38.732198   36539 addons.go:510] duration metric: took 892.982269ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1204 23:56:38.732240   36539 start.go:246] waiting for cluster config update ...
	I1204 23:56:38.732263   36539 start.go:255] writing updated cluster config ...
	I1204 23:56:38.733842   36539 out.go:201] 
	I1204 23:56:38.735235   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:56:38.735344   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.736922   36539 out.go:177] * Starting "ha-168758-m02" control-plane node in "ha-168758" cluster
	I1204 23:56:38.738175   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:56:38.738201   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:56:38.738327   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:56:38.738348   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:56:38.738442   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:56:38.738692   36539 start.go:360] acquireMachinesLock for ha-168758-m02: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:56:38.738753   36539 start.go:364] duration metric: took 32.863µs to acquireMachinesLock for "ha-168758-m02"
	I1204 23:56:38.738778   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:56:38.738869   36539 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1204 23:56:38.740536   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:56:38.740652   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:56:38.740698   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:56:38.755586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I1204 23:56:38.756099   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:56:38.756624   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:56:38.756663   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:56:38.757003   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:56:38.757205   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:56:38.757343   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:56:38.757484   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:56:38.757510   36539 client.go:168] LocalClient.Create starting
	I1204 23:56:38.757536   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:56:38.757571   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757584   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757629   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:56:38.757661   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:56:38.757671   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:56:38.757684   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:56:38.757692   36539 main.go:141] libmachine: (ha-168758-m02) Calling .PreCreateCheck
	I1204 23:56:38.757862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:56:38.758275   36539 main.go:141] libmachine: Creating machine...
	I1204 23:56:38.758289   36539 main.go:141] libmachine: (ha-168758-m02) Calling .Create
	I1204 23:56:38.758443   36539 main.go:141] libmachine: (ha-168758-m02) Creating KVM machine...
	I1204 23:56:38.759771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing default KVM network
	I1204 23:56:38.759905   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found existing private KVM network mk-ha-168758
	I1204 23:56:38.760010   36539 main.go:141] libmachine: (ha-168758-m02) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:38.760033   36539 main.go:141] libmachine: (ha-168758-m02) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:56:38.760144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:38.760017   36906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:38.760207   36539 main.go:141] libmachine: (ha-168758-m02) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:56:39.002966   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.002856   36906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa...
	I1204 23:56:39.068744   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068644   36906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk...
	I1204 23:56:39.068771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing magic tar header
	I1204 23:56:39.068781   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Writing SSH key tar header
	I1204 23:56:39.068788   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:39.068754   36906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 ...
	I1204 23:56:39.068880   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02
	I1204 23:56:39.068913   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02 (perms=drwx------)
	I1204 23:56:39.068925   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:56:39.068946   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:56:39.068960   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:56:39.068973   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:56:39.068987   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:56:39.068997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:56:39.069012   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:56:39.069021   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Checking permissions on dir: /home
	I1204 23:56:39.069037   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Skipping /home - not owner
	I1204 23:56:39.069073   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:56:39.069102   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:56:39.069123   36539 main.go:141] libmachine: (ha-168758-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:56:39.069137   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:39.070045   36539 main.go:141] libmachine: (ha-168758-m02) define libvirt domain using xml: 
	I1204 23:56:39.070076   36539 main.go:141] libmachine: (ha-168758-m02) <domain type='kvm'>
	I1204 23:56:39.070100   36539 main.go:141] libmachine: (ha-168758-m02)   <name>ha-168758-m02</name>
	I1204 23:56:39.070118   36539 main.go:141] libmachine: (ha-168758-m02)   <memory unit='MiB'>2200</memory>
	I1204 23:56:39.070139   36539 main.go:141] libmachine: (ha-168758-m02)   <vcpu>2</vcpu>
	I1204 23:56:39.070157   36539 main.go:141] libmachine: (ha-168758-m02)   <features>
	I1204 23:56:39.070172   36539 main.go:141] libmachine: (ha-168758-m02)     <acpi/>
	I1204 23:56:39.070188   36539 main.go:141] libmachine: (ha-168758-m02)     <apic/>
	I1204 23:56:39.070199   36539 main.go:141] libmachine: (ha-168758-m02)     <pae/>
	I1204 23:56:39.070210   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070233   36539 main.go:141] libmachine: (ha-168758-m02)   </features>
	I1204 23:56:39.070244   36539 main.go:141] libmachine: (ha-168758-m02)   <cpu mode='host-passthrough'>
	I1204 23:56:39.070251   36539 main.go:141] libmachine: (ha-168758-m02)   
	I1204 23:56:39.070266   36539 main.go:141] libmachine: (ha-168758-m02)   </cpu>
	I1204 23:56:39.070275   36539 main.go:141] libmachine: (ha-168758-m02)   <os>
	I1204 23:56:39.070282   36539 main.go:141] libmachine: (ha-168758-m02)     <type>hvm</type>
	I1204 23:56:39.070293   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='cdrom'/>
	I1204 23:56:39.070303   36539 main.go:141] libmachine: (ha-168758-m02)     <boot dev='hd'/>
	I1204 23:56:39.070312   36539 main.go:141] libmachine: (ha-168758-m02)     <bootmenu enable='no'/>
	I1204 23:56:39.070322   36539 main.go:141] libmachine: (ha-168758-m02)   </os>
	I1204 23:56:39.070334   36539 main.go:141] libmachine: (ha-168758-m02)   <devices>
	I1204 23:56:39.070343   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='cdrom'>
	I1204 23:56:39.070361   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/boot2docker.iso'/>
	I1204 23:56:39.070369   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hdc' bus='scsi'/>
	I1204 23:56:39.070376   36539 main.go:141] libmachine: (ha-168758-m02)       <readonly/>
	I1204 23:56:39.070387   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070396   36539 main.go:141] libmachine: (ha-168758-m02)     <disk type='file' device='disk'>
	I1204 23:56:39.070401   36539 main.go:141] libmachine: (ha-168758-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:56:39.070412   36539 main.go:141] libmachine: (ha-168758-m02)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/ha-168758-m02.rawdisk'/>
	I1204 23:56:39.070427   36539 main.go:141] libmachine: (ha-168758-m02)       <target dev='hda' bus='virtio'/>
	I1204 23:56:39.070435   36539 main.go:141] libmachine: (ha-168758-m02)     </disk>
	I1204 23:56:39.070439   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070447   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='mk-ha-168758'/>
	I1204 23:56:39.070452   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070459   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070463   36539 main.go:141] libmachine: (ha-168758-m02)     <interface type='network'>
	I1204 23:56:39.070468   36539 main.go:141] libmachine: (ha-168758-m02)       <source network='default'/>
	I1204 23:56:39.070475   36539 main.go:141] libmachine: (ha-168758-m02)       <model type='virtio'/>
	I1204 23:56:39.070479   36539 main.go:141] libmachine: (ha-168758-m02)     </interface>
	I1204 23:56:39.070483   36539 main.go:141] libmachine: (ha-168758-m02)     <serial type='pty'>
	I1204 23:56:39.070500   36539 main.go:141] libmachine: (ha-168758-m02)       <target port='0'/>
	I1204 23:56:39.070515   36539 main.go:141] libmachine: (ha-168758-m02)     </serial>
	I1204 23:56:39.070526   36539 main.go:141] libmachine: (ha-168758-m02)     <console type='pty'>
	I1204 23:56:39.070533   36539 main.go:141] libmachine: (ha-168758-m02)       <target type='serial' port='0'/>
	I1204 23:56:39.070539   36539 main.go:141] libmachine: (ha-168758-m02)     </console>
	I1204 23:56:39.070547   36539 main.go:141] libmachine: (ha-168758-m02)     <rng model='virtio'>
	I1204 23:56:39.070561   36539 main.go:141] libmachine: (ha-168758-m02)       <backend model='random'>/dev/random</backend>
	I1204 23:56:39.070568   36539 main.go:141] libmachine: (ha-168758-m02)     </rng>
	I1204 23:56:39.070579   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070586   36539 main.go:141] libmachine: (ha-168758-m02)     
	I1204 23:56:39.070604   36539 main.go:141] libmachine: (ha-168758-m02)   </devices>
	I1204 23:56:39.070621   36539 main.go:141] libmachine: (ha-168758-m02) </domain>
	I1204 23:56:39.070635   36539 main.go:141] libmachine: (ha-168758-m02) 
	I1204 23:56:39.077113   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:a2:74:ea in network default
	I1204 23:56:39.077678   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring networks are active...
	I1204 23:56:39.077701   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:39.078445   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network default is active
	I1204 23:56:39.078865   36539 main.go:141] libmachine: (ha-168758-m02) Ensuring network mk-ha-168758 is active
	I1204 23:56:39.079237   36539 main.go:141] libmachine: (ha-168758-m02) Getting domain xml...
	I1204 23:56:39.079930   36539 main.go:141] libmachine: (ha-168758-m02) Creating domain...
	I1204 23:56:40.265017   36539 main.go:141] libmachine: (ha-168758-m02) Waiting to get IP...
	I1204 23:56:40.265754   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.266145   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.266182   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.266132   36906 retry.go:31] will retry after 272.630461ms: waiting for machine to come up
	I1204 23:56:40.540436   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.540778   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.540807   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.540731   36906 retry.go:31] will retry after 280.679773ms: waiting for machine to come up
	I1204 23:56:40.823193   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:40.823606   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:40.823628   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:40.823550   36906 retry.go:31] will retry after 428.494355ms: waiting for machine to come up
	I1204 23:56:41.253121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.253634   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.253657   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.253584   36906 retry.go:31] will retry after 443.923324ms: waiting for machine to come up
	I1204 23:56:41.698819   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:41.699183   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:41.699213   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:41.699129   36906 retry.go:31] will retry after 522.039893ms: waiting for machine to come up
	I1204 23:56:42.222513   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:42.223045   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:42.223075   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:42.222991   36906 retry.go:31] will retry after 778.146913ms: waiting for machine to come up
	I1204 23:56:43.002368   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.002725   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.002748   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.002697   36906 retry.go:31] will retry after 985.520994ms: waiting for machine to come up
	I1204 23:56:43.990537   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:43.991046   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:43.991072   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:43.991013   36906 retry.go:31] will retry after 1.065554597s: waiting for machine to come up
	I1204 23:56:45.057549   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:45.057997   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:45.058038   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:45.057954   36906 retry.go:31] will retry after 1.75203222s: waiting for machine to come up
	I1204 23:56:46.812766   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:46.813189   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:46.813218   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:46.813128   36906 retry.go:31] will retry after 1.936174375s: waiting for machine to come up
	I1204 23:56:48.752161   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:48.752595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:48.752615   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:48.752561   36906 retry.go:31] will retry after 2.006049391s: waiting for machine to come up
	I1204 23:56:50.760948   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:50.761423   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:50.761448   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:50.761364   36906 retry.go:31] will retry after 3.635507158s: waiting for machine to come up
	I1204 23:56:54.398466   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:54.398888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:54.398919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:54.398858   36906 retry.go:31] will retry after 3.280789577s: waiting for machine to come up
	I1204 23:56:57.683343   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:56:57.683888   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find current IP address of domain ha-168758-m02 in network mk-ha-168758
	I1204 23:56:57.683924   36539 main.go:141] libmachine: (ha-168758-m02) DBG | I1204 23:56:57.683859   36906 retry.go:31] will retry after 5.640187241s: waiting for machine to come up
	I1204 23:57:03.329060   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329506   36539 main.go:141] libmachine: (ha-168758-m02) Found IP for machine: 192.168.39.70
	I1204 23:57:03.329530   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.329536   36539 main.go:141] libmachine: (ha-168758-m02) Reserving static IP address...
	I1204 23:57:03.329911   36539 main.go:141] libmachine: (ha-168758-m02) DBG | unable to find host DHCP lease matching {name: "ha-168758-m02", mac: "52:54:00:e2:30:30", ip: "192.168.39.70"} in network mk-ha-168758
	I1204 23:57:03.403859   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Getting to WaitForSSH function...
	I1204 23:57:03.403880   36539 main.go:141] libmachine: (ha-168758-m02) Reserved static IP address: 192.168.39.70
	I1204 23:57:03.403891   36539 main.go:141] libmachine: (ha-168758-m02) Waiting for SSH to be available...
	I1204 23:57:03.406504   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407026   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.407061   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.407259   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH client type: external
	I1204 23:57:03.407286   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa (-rw-------)
	I1204 23:57:03.407378   36539 main.go:141] libmachine: (ha-168758-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:57:03.407404   36539 main.go:141] libmachine: (ha-168758-m02) DBG | About to run SSH command:
	I1204 23:57:03.407413   36539 main.go:141] libmachine: (ha-168758-m02) DBG | exit 0
	I1204 23:57:03.534727   36539 main.go:141] libmachine: (ha-168758-m02) DBG | SSH cmd err, output: <nil>: 
	I1204 23:57:03.535017   36539 main.go:141] libmachine: (ha-168758-m02) KVM machine creation complete!
	I1204 23:57:03.535420   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:03.536018   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536208   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:03.536363   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:57:03.536382   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetState
	I1204 23:57:03.537483   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:57:03.537500   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:57:03.537508   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:57:03.537516   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.539751   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540081   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.540109   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.540281   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.540440   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540587   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.540727   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.540880   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.541129   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.541145   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:57:03.649219   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:03.649249   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:57:03.649261   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.652199   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652771   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.652856   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.652885   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.653079   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653247   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.653419   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.653613   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.653828   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.653842   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:57:03.763190   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:57:03.763346   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:57:03.763361   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:57:03.763371   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763602   36539 buildroot.go:166] provisioning hostname "ha-168758-m02"
	I1204 23:57:03.763632   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:03.763818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.766973   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767390   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.767437   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.767603   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.767818   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768055   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.768271   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.768440   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.768709   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.768726   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m02 && echo "ha-168758-m02" | sudo tee /etc/hostname
	I1204 23:57:03.892672   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m02
	
	I1204 23:57:03.892702   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:03.895144   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895422   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:03.895442   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:03.895624   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:03.895781   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.895901   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:03.896009   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:03.896154   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:03.896362   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:03.896378   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:57:04.014565   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:57:04.014591   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:57:04.014605   36539 buildroot.go:174] setting up certificates
	I1204 23:57:04.014613   36539 provision.go:84] configureAuth start
	I1204 23:57:04.014620   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetMachineName
	I1204 23:57:04.014862   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.017595   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.017892   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.017919   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.018050   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.020121   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020411   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.020432   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.020630   36539 provision.go:143] copyHostCerts
	I1204 23:57:04.020662   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020697   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:57:04.020725   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:57:04.020805   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:57:04.020931   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.020973   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:57:04.020984   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:57:04.021027   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:57:04.021089   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021109   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:57:04.021118   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:57:04.021154   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:57:04.021216   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m02 san=[127.0.0.1 192.168.39.70 ha-168758-m02 localhost minikube]
	I1204 23:57:04.229700   36539 provision.go:177] copyRemoteCerts
	I1204 23:57:04.229780   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:57:04.229810   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.233253   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233650   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.233681   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.233956   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.234161   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.234318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.234434   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.316064   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:57:04.316143   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:57:04.338512   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:57:04.338585   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:57:04.360119   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:57:04.360178   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:57:04.381996   36539 provision.go:87] duration metric: took 367.37237ms to configureAuth
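
The configureAuth step above just finished: it generated a server certificate whose SANs are the ones listed in the log (127.0.0.1, 192.168.39.70, ha-168758-m02, localhost, minikube) and pushed the CA, server cert and key to /etc/docker on the guest. The sketch below shows, in Go, how a SAN-bearing server certificate like that could be produced with crypto/x509. It is a self-signed stand-in for illustration only; minikube's provision code signs with its own CA, and nothing here beyond the SANs and org string is taken from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (2048-bit RSA, for the example).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-168758-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		DNSNames:    []string{"ha-168758-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
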
	I1204 23:57:04.382022   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:57:04.382234   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:04.382318   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.384736   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385043   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.385085   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.385263   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.385456   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385634   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.385765   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.385893   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.386085   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.386102   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:57:04.615454   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:57:04.615480   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:57:04.615488   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetURL
	I1204 23:57:04.616607   36539 main.go:141] libmachine: (ha-168758-m02) DBG | Using libvirt version 6000000
	I1204 23:57:04.618773   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619138   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.619174   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.619325   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:57:04.619342   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:57:04.619349   36539 client.go:171] duration metric: took 25.861833034s to LocalClient.Create
	I1204 23:57:04.619371   36539 start.go:167] duration metric: took 25.86188766s to libmachine.API.Create "ha-168758"
	I1204 23:57:04.619385   36539 start.go:293] postStartSetup for "ha-168758-m02" (driver="kvm2")
	I1204 23:57:04.619400   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:57:04.619421   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.619646   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:57:04.619675   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.622076   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622449   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.622480   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.622622   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.622800   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.622964   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.623116   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.712567   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:57:04.716652   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:57:04.716681   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:57:04.716761   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:57:04.716858   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:57:04.716869   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:57:04.716975   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:57:04.726044   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:04.748709   36539 start.go:296] duration metric: took 129.292597ms for postStartSetup
	I1204 23:57:04.748769   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetConfigRaw
	I1204 23:57:04.749436   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.751961   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752346   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.752376   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.752616   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:04.752793   36539 start.go:128] duration metric: took 26.013914076s to createHost
	I1204 23:57:04.752815   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.755319   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755652   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.755682   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.755827   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.755971   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756086   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.756189   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.756307   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:57:04.756479   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1204 23:57:04.756492   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:57:04.866426   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356624.824376765
	
	I1204 23:57:04.866448   36539 fix.go:216] guest clock: 1733356624.824376765
	I1204 23:57:04.866456   36539 fix.go:229] Guest: 2024-12-04 23:57:04.824376765 +0000 UTC Remote: 2024-12-04 23:57:04.752806033 +0000 UTC m=+70.097491807 (delta=71.570732ms)
	I1204 23:57:04.866470   36539 fix.go:200] guest clock delta is within tolerance: 71.570732ms
	I1204 23:57:04.866475   36539 start.go:83] releasing machines lock for "ha-168758-m02", held for 26.12770888s
	I1204 23:57:04.866493   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.866780   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:04.869289   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.869690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.869723   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.871575   36539 out.go:177] * Found network options:
	I1204 23:57:04.872861   36539 out.go:177]   - NO_PROXY=192.168.39.246
	W1204 23:57:04.874174   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.874219   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.874854   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875047   36539 main.go:141] libmachine: (ha-168758-m02) Calling .DriverName
	I1204 23:57:04.875134   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:57:04.875168   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	W1204 23:57:04.875274   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:57:04.875348   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:57:04.875364   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHHostname
	I1204 23:57:04.877779   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878165   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878255   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878300   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878462   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878636   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.878704   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:04.878726   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:04.878789   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.878869   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHPort
	I1204 23:57:04.878940   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:04.878980   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHKeyPath
	I1204 23:57:04.879169   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetSSHUsername
	I1204 23:57:04.879295   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m02/id_rsa Username:docker}
	I1204 23:57:05.115514   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:57:05.121578   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:57:05.121649   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:57:05.137141   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:57:05.137166   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:57:05.137249   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:57:05.153397   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:57:05.167629   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:57:05.167697   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:57:05.180991   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:57:05.194817   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:57:05.319862   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:57:05.480267   36539 docker.go:233] disabling docker service ...
	I1204 23:57:05.480352   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:57:05.494063   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:57:05.506104   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:57:05.622700   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:57:05.745424   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:57:05.759393   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:57:05.776697   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:57:05.776762   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.787070   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:57:05.787137   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.797124   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.807348   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.816991   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:57:05.826982   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.836497   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:57:05.852423   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
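
The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. A small Go sketch of the first two edits follows; the helper is hypothetical (minikube does this over SSH with sed, exactly as logged), but the file path and values are the ones in the log.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
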
	I1204 23:57:05.862453   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:57:05.871289   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:57:05.871338   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:57:05.883727   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:57:05.893241   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:06.013483   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:57:06.107092   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:57:06.107171   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:57:06.112530   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:57:06.112612   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:57:06.116309   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:57:06.157302   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:57:06.157371   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.184889   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:57:06.212833   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:57:06.214532   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:57:06.215684   36539 main.go:141] libmachine: (ha-168758-m02) Calling .GetIP
	I1204 23:57:06.218318   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218690   36539 main.go:141] libmachine: (ha-168758-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:30", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:53 +0000 UTC Type:0 Mac:52:54:00:e2:30:30 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-168758-m02 Clientid:01:52:54:00:e2:30:30}
	I1204 23:57:06.218715   36539 main.go:141] libmachine: (ha-168758-m02) DBG | domain ha-168758-m02 has defined IP address 192.168.39.70 and MAC address 52:54:00:e2:30:30 in network mk-ha-168758
	I1204 23:57:06.218889   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:57:06.222805   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:57:06.234260   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:57:06.234431   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:06.234663   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.234694   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.249086   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1204 23:57:06.249474   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.249897   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.249916   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.250208   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.250399   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:57:06.251961   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:06.252244   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:06.252279   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:06.266555   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1204 23:57:06.267012   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:06.267500   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:06.267519   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:06.267807   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:06.267965   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:06.268109   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.70
	I1204 23:57:06.268124   36539 certs.go:194] generating shared ca certs ...
	I1204 23:57:06.268142   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.268293   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:57:06.268346   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:57:06.268359   36539 certs.go:256] generating profile certs ...
	I1204 23:57:06.268451   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:57:06.268485   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d
	I1204 23:57:06.268506   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.254]
	I1204 23:57:06.433427   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d ...
	I1204 23:57:06.433455   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d: {Name:mk9b5ab41b84365c7c87c13c8ccd0d5c3db76aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433623   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d ...
	I1204 23:57:06.433643   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d: {Name:mk61c76c74eaed61a46e313e778d5b43d1ba813c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:57:06.433739   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:57:06.433896   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.2a89e12d -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:57:06.434089   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:57:06.434105   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:57:06.434120   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:57:06.434140   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:57:06.434158   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:57:06.434176   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:57:06.434192   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:57:06.434208   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:57:06.434229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:57:06.434295   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:57:06.434333   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:57:06.434347   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:57:06.434398   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:57:06.434429   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:57:06.434462   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:57:06.434512   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:57:06.434550   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:57:06.434571   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.434596   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:57:06.434632   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:06.437677   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438091   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:06.438121   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:06.438323   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:06.438484   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:06.438627   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:06.438715   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:06.518370   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:57:06.523710   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:57:06.534968   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:57:06.539161   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:57:06.549270   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:57:06.553348   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:57:06.562722   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:57:06.566572   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:57:06.575817   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:57:06.579621   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:57:06.588791   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:57:06.592406   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:57:06.601971   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:57:06.626102   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:57:06.647746   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:57:06.670354   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:57:06.700509   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1204 23:57:06.722552   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:57:06.745113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:57:06.767304   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:57:06.789718   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:57:06.812073   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:57:06.834754   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:57:06.857779   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:57:06.872839   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:57:06.887702   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:57:06.903420   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:57:06.918686   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:57:06.933715   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:57:06.948744   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:57:06.963779   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:57:06.969267   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:57:06.978970   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.982989   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.983047   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:57:06.988382   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:57:06.997926   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:57:07.007572   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011739   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.011796   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:57:07.017095   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:57:07.027066   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:57:07.037055   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041161   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.041220   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:57:07.046426   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:57:07.056376   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:57:07.060388   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:57:07.060436   36539 kubeadm.go:934] updating node {m02 192.168.39.70 8443 v1.31.2 crio true true} ...
	I1204 23:57:07.060522   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:57:07.060547   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:57:07.060575   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:57:07.074631   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:57:07.074712   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
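
The kube-vip manifest printed above is generated from a template and written to /etc/kubernetes/manifests/kube-vip.yaml a few lines later (the 1441-byte scp). The Go sketch below renders a much-reduced manifest with text/template to show the shape of that step; the template text and field names are illustrative, and only the image, VIP and port values come from the log.

package main

import (
	"os"
	"text/template"
)

// Reduced stand-in for the static-pod template; the real manifest is the one
// printed in the log above.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	err := t.Execute(os.Stdout, struct{ Image, VIP, Port string }{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.6",
		VIP:   "192.168.39.254",
		Port:  "8443",
	})
	if err != nil {
		panic(err)
	}
}
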
	I1204 23:57:07.074777   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.084179   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:57:07.084224   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:57:07.093203   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:57:07.093229   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093277   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1204 23:57:07.093292   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:57:07.093312   36539 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1204 23:57:07.097086   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:57:07.097113   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:57:08.011623   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.011699   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:57:08.017164   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:57:08.017200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:57:08.175961   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:08.200946   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.201051   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:57:08.211652   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:57:08.211693   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
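
Because /var/lib/minikube/binaries/v1.31.2 does not exist yet on the new node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum=file:...sha256 pin and then scp'd over. The sketch below shows one way to do that verification in Go for kubeadm; the helper itself is hypothetical, but the URL and the fact that the .sha256 file holds the expected digest match what the log requests.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	want, err := fetch(base + ".sha256") // file contains the hex digest
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch for kubeadm")
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm downloaded and verified")
}
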
	I1204 23:57:08.563181   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:57:08.571736   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1204 23:57:08.586849   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:57:08.601801   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:57:08.616859   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:57:08.620378   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:57:08.631234   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:08.746289   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:08.762474   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:57:08.762951   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:08.763010   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:08.777798   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1204 23:57:08.778308   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:08.778784   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:08.778801   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:08.779085   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:08.779279   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:57:08.779432   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:57:08.779528   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:57:08.779552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:57:08.782464   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.782947   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:57:08.782974   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:57:08.783150   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:57:08.783306   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:57:08.783439   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:57:08.783593   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:57:08.959692   36539 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:08.959734   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I1204 23:57:30.700354   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lbtrqy.cl9b45dngbh3w53w --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m02 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (21.740590437s)
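
The join itself is a single kubeadm invocation run over SSH with the version-pinned binaries prepended to PATH; it completed in about 22 seconds here. A local Go stand-in for that invocation is sketched below, purely to make the command structure explicit; in practice ssh_runner executes it remotely, and the token and CA hash are the throwaway values from this run.

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	// Mirrors the logged command: sudo env PATH=... kubeadm join ... --control-plane
	cmd := exec.CommandContext(ctx, "sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.31.2:"+os.Getenv("PATH"),
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "lbtrqy.cl9b45dngbh3w53w",
		"--discovery-token-ca-cert-hash", "sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-168758-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.70",
		"--apiserver-bind-port=8443")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
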
	I1204 23:57:30.700399   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:57:31.260769   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m02 minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:57:31.388605   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:57:31.507247   36539 start.go:319] duration metric: took 22.727809032s to joinCluster
	I1204 23:57:31.507330   36539 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:31.507657   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:31.510077   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:57:31.511382   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:57:31.807145   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:57:31.866005   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:57:31.866366   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:57:31.866446   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:57:31.866697   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:31.866802   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:31.866815   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:31.866826   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:31.866832   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:31.894115   36539 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1204 23:57:32.367519   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.367541   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.367548   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.367552   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.372462   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:32.867267   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:32.867297   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:32.867307   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:32.867312   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:32.870593   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.367165   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.367188   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.367199   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.367204   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.371115   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:33.867446   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:33.867467   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:33.867490   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:33.867496   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:33.895642   36539 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1204 23:57:33.896043   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
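
From here the test waits up to 6 minutes for ha-168758-m02 to report Ready, polling the API server (through the primary node's endpoint, since the VIP-based kubeconfig host was overridden above) roughly twice a second. A client-go sketch of an equivalent wait loop is below; the kubeconfig path and node name are taken from this job, but the rest is an assumption about how one might reimplement the poll, not minikube's actual node_ready code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20045-14126/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms for up to 6 minutes, like the loop in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-168758-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-168758-m02 is Ready")
}
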
	I1204 23:57:34.367204   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.367227   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.367236   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.367241   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.370493   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:34.867119   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:34.867142   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:34.867150   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:34.867153   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:34.871875   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:35.367013   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.367043   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.367054   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.367059   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.369963   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:35.867691   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:35.867711   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:35.867718   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:35.867722   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:35.873237   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:57:36.366914   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.366938   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.366946   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.366957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.369646   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:36.370291   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:36.867182   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:36.867206   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:36.867216   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:36.867221   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:36.870020   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:37.367042   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.367063   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.367071   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.367074   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.370425   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:37.867722   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:37.867746   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:37.867757   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:37.867762   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:37.870832   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.367880   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.367907   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.367919   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.367925   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.371699   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:38.374216   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:38.866862   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:38.866882   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:38.866889   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:38.866894   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:38.870442   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.367314   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.367335   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.367343   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.367347   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.370397   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:39.867083   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:39.867112   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:39.867124   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:39.867131   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:39.869764   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.367183   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.367204   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.367212   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.367216   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.370122   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.866955   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:40.866977   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:40.866985   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:40.866989   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:40.869893   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:40.870550   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:41.367838   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.367867   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.367877   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.367883   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.371252   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:41.867051   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:41.867074   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:41.867081   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:41.867086   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:41.869735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.367874   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.367896   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.367904   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.367908   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.370905   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:42.867902   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:42.867923   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:42.867930   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:42.867936   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:42.871077   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:42.871630   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:43.367020   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.367041   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.367049   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.367053   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.369800   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:43.867917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:43.867935   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:43.867943   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:43.867948   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:43.871058   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:44.366921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.366944   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.366952   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.366956   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.369844   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:44.867849   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:44.867871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:44.867878   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:44.867882   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:44.871084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.367038   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.367064   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.367072   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.367076   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.370136   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:45.370824   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:45.867201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:45.867222   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:45.867230   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:45.867234   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:45.870440   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.367088   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.367111   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.367122   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.367128   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.370360   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:46.867494   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:46.867515   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:46.867523   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:46.867527   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:46.870477   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:47.367351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.367377   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.367388   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.367395   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.370421   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:47.371066   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:47.867351   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:47.867371   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:47.867376   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:47.867380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:47.870561   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:48.367177   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.367201   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.367208   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.367212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.370109   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:48.867913   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:48.867940   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:48.867951   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:48.867957   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:48.870839   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:49.367731   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.367752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.367763   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.367775   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.371251   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:49.371965   36539 node_ready.go:53] node "ha-168758-m02" has status "Ready":"False"
	I1204 23:57:49.867061   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:49.867082   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:49.867091   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:49.867097   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:49.870692   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.367727   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.367748   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.367756   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.367760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.371359   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.372130   36539 node_ready.go:49] node "ha-168758-m02" has status "Ready":"True"
	I1204 23:57:50.372146   36539 node_ready.go:38] duration metric: took 18.505433084s for node "ha-168758-m02" to be "Ready" ...
	I1204 23:57:50.372154   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:57:50.372222   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:50.372232   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.372239   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.372245   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.376642   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:50.381842   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.381907   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:57:50.381916   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.381922   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.381927   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.384768   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.385355   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.385369   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.385375   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.385379   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.387687   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.388085   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.388100   36539 pod_ready.go:82] duration metric: took 6.236474ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388109   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.388153   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:57:50.388161   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.388167   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.388170   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.390871   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.391380   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.391394   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.391400   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.391405   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.394126   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.394638   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.394652   36539 pod_ready.go:82] duration metric: took 6.538378ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394659   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.394701   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:57:50.394709   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.394715   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.394720   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.397059   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.397459   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.397471   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.397478   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.397482   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.399735   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.400125   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.400139   36539 pod_ready.go:82] duration metric: took 5.474765ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400148   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.400188   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:57:50.400196   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.400202   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.400206   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402193   36539 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 23:57:50.402740   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:50.402752   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.402760   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.402765   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.404874   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:50.405354   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.405371   36539 pod_ready.go:82] duration metric: took 5.218099ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.405382   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.568781   36539 request.go:632] Waited for 163.320423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568837   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:57:50.568845   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.568853   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.568860   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.572195   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.768439   36539 request.go:632] Waited for 195.374469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768509   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:50.768529   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.768537   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.768542   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.771909   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:50.772411   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:50.772428   36539 pod_ready.go:82] duration metric: took 367.03986ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.772438   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:50.968760   36539 request.go:632] Waited for 196.264628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968844   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:57:50.968862   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:50.968875   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:50.968883   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:50.972428   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.168517   36539 request.go:632] Waited for 195.379762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168570   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.168575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.168582   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.168586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.171966   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.172839   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.172864   36539 pod_ready.go:82] duration metric: took 400.419668ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.172878   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.368601   36539 request.go:632] Waited for 195.644185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368673   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:57:51.368681   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.368689   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.368696   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.371786   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.568796   36539 request.go:632] Waited for 196.387928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568847   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:51.568853   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.568859   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.568862   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.572174   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.572929   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.572951   36539 pod_ready.go:82] duration metric: took 400.064616ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.572965   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.767794   36539 request.go:632] Waited for 194.737211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767865   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:57:51.767871   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.767885   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.767892   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.771726   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.967706   36539 request.go:632] Waited for 195.275144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967798   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:51.967806   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:51.967816   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:51.967825   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:51.971641   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:51.972143   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:51.972162   36539 pod_ready.go:82] duration metric: took 399.188822ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:51.972176   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.168279   36539 request.go:632] Waited for 196.037931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168357   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:57:52.168364   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.168371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.168380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.172095   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.368165   36539 request.go:632] Waited for 195.357162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368231   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:52.368236   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.368244   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.368246   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.371117   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:52.371718   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.371734   36539 pod_ready.go:82] duration metric: took 399.551881ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.371744   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.568001   36539 request.go:632] Waited for 196.176636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568074   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:57:52.568080   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.568088   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.568092   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.571383   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.768625   36539 request.go:632] Waited for 196.402632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768686   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:52.768691   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.768702   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.768706   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.772096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:52.772550   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:52.772567   36539 pod_ready.go:82] duration metric: took 400.817883ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.772577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:52.968672   36539 request.go:632] Waited for 196.025622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968728   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:57:52.968736   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:52.968747   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:52.968753   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:52.972192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.168261   36539 request.go:632] Waited for 195.422832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168312   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:57:53.168321   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.168329   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.168332   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.171764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.172376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.172393   36539 pod_ready.go:82] duration metric: took 399.810959ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.172403   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.368473   36539 request.go:632] Waited for 196.003907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368564   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:57:53.368575   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.368586   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.368598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.371851   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.567754   36539 request.go:632] Waited for 195.293783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567819   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:57:53.567824   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.567831   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.567836   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.570770   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:57:53.571309   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:57:53.571333   36539 pod_ready.go:82] duration metric: took 398.922957ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:57:53.571348   36539 pod_ready.go:39] duration metric: took 3.199181638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:57:53.571367   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:57:53.571414   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:57:53.585872   36539 api_server.go:72] duration metric: took 22.078506409s to wait for apiserver process to appear ...
	I1204 23:57:53.585898   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:57:53.585919   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:57:53.591379   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:57:53.591435   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:57:53.591440   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.591447   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.591452   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.592362   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:57:53.592455   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:57:53.592472   36539 api_server.go:131] duration metric: took 6.568025ms to wait for apiserver health ...
	I1204 23:57:53.592481   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:57:53.767827   36539 request.go:632] Waited for 175.285413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767882   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:53.767888   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.767897   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.767903   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.772373   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:53.776877   36539 system_pods.go:59] 17 kube-system pods found
	I1204 23:57:53.776902   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:53.776908   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:53.776912   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:53.776916   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:53.776920   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:53.776924   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:53.776927   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:53.776930   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:53.776934   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:53.776937   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:53.776941   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:53.776946   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:53.776949   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:53.776954   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:53.776956   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:53.776962   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:53.776965   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:53.776972   36539 system_pods.go:74] duration metric: took 184.481749ms to wait for pod list to return data ...
	I1204 23:57:53.777015   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:57:53.968124   36539 request.go:632] Waited for 191.035378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968194   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:57:53.968199   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:53.968206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:53.968213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:53.971592   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:53.971802   36539 default_sa.go:45] found service account: "default"
	I1204 23:57:53.971821   36539 default_sa.go:55] duration metric: took 194.795603ms for default service account to be created ...
	I1204 23:57:53.971829   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:57:54.168141   36539 request.go:632] Waited for 196.235523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168201   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:57:54.168209   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.168224   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.168233   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.172658   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:57:54.176765   36539 system_pods.go:86] 17 kube-system pods found
	I1204 23:57:54.176792   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:57:54.176798   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:57:54.176803   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:57:54.176806   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:57:54.176809   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:57:54.176813   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:57:54.176816   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:57:54.176819   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:57:54.176823   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:57:54.176835   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:57:54.176843   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:57:54.176846   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:57:54.176850   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:57:54.176853   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:57:54.176856   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:57:54.176859   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:57:54.176862   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:57:54.176867   36539 system_pods.go:126] duration metric: took 205.031013ms to wait for k8s-apps to be running ...
	I1204 23:57:54.176876   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:57:54.176915   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:57:54.192123   36539 system_svc.go:56] duration metric: took 15.235714ms WaitForService to wait for kubelet
	I1204 23:57:54.192153   36539 kubeadm.go:582] duration metric: took 22.684789622s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:57:54.192178   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:57:54.368510   36539 request.go:632] Waited for 176.251249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368569   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:57:54.368574   36539 round_trippers.go:469] Request Headers:
	I1204 23:57:54.368581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:57:54.368586   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:57:54.371935   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:57:54.372940   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372962   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372973   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:57:54.372977   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:57:54.372982   36539 node_conditions.go:105] duration metric: took 180.798569ms to run NodePressure ...
	I1204 23:57:54.372992   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:57:54.373014   36539 start.go:255] writing updated cluster config ...
	I1204 23:57:54.375210   36539 out.go:201] 
	I1204 23:57:54.376552   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:57:54.376634   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.378003   36539 out.go:177] * Starting "ha-168758-m03" control-plane node in "ha-168758" cluster
	I1204 23:57:54.379143   36539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:57:54.379161   36539 cache.go:56] Caching tarball of preloaded images
	I1204 23:57:54.379243   36539 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:57:54.379255   36539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:57:54.379336   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:57:54.379486   36539 start.go:360] acquireMachinesLock for ha-168758-m03: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 23:57:54.379524   36539 start.go:364] duration metric: took 21.433µs to acquireMachinesLock for "ha-168758-m03"
	I1204 23:57:54.379541   36539 start.go:93] Provisioning new machine with config: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:57:54.379640   36539 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1204 23:57:54.382003   36539 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 23:57:54.382100   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:57:54.382136   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:57:54.396586   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I1204 23:57:54.397026   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:57:54.397468   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:57:54.397491   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:57:54.397753   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:57:54.397933   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:57:54.398074   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:57:54.398206   36539 start.go:159] libmachine.API.Create for "ha-168758" (driver="kvm2")
	I1204 23:57:54.398237   36539 client.go:168] LocalClient.Create starting
	I1204 23:57:54.398271   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1204 23:57:54.398314   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398335   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398396   36539 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1204 23:57:54.398425   36539 main.go:141] libmachine: Decoding PEM data...
	I1204 23:57:54.398442   36539 main.go:141] libmachine: Parsing certificate...
	I1204 23:57:54.398467   36539 main.go:141] libmachine: Running pre-create checks...
	I1204 23:57:54.398479   36539 main.go:141] libmachine: (ha-168758-m03) Calling .PreCreateCheck
	I1204 23:57:54.398600   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:57:54.398924   36539 main.go:141] libmachine: Creating machine...
	I1204 23:57:54.398936   36539 main.go:141] libmachine: (ha-168758-m03) Calling .Create
	I1204 23:57:54.399103   36539 main.go:141] libmachine: (ha-168758-m03) Creating KVM machine...
	I1204 23:57:54.400268   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing default KVM network
	I1204 23:57:54.400380   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found existing private KVM network mk-ha-168758
	I1204 23:57:54.400501   36539 main.go:141] libmachine: (ha-168758-m03) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.400521   36539 main.go:141] libmachine: (ha-168758-m03) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:57:54.400579   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.400496   37304 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.400687   36539 main.go:141] libmachine: (ha-168758-m03) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 23:57:54.638854   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.638724   37304 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa...
	I1204 23:57:54.688376   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688260   37304 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk...
	I1204 23:57:54.688404   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing magic tar header
	I1204 23:57:54.688416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Writing SSH key tar header
	I1204 23:57:54.688424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:54.688388   37304 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 ...
	I1204 23:57:54.688510   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03
	I1204 23:57:54.688554   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03 (perms=drwx------)
	I1204 23:57:54.688585   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1204 23:57:54.688596   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1204 23:57:54.688610   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:57:54.688624   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1204 23:57:54.688642   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1204 23:57:54.688652   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1204 23:57:54.688658   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1204 23:57:54.688668   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1204 23:57:54.688679   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home/jenkins
	I1204 23:57:54.688694   36539 main.go:141] libmachine: (ha-168758-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1204 23:57:54.688711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Checking permissions on dir: /home
	I1204 23:57:54.688721   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:54.688733   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Skipping /home - not owner
	I1204 23:57:54.689655   36539 main.go:141] libmachine: (ha-168758-m03) define libvirt domain using xml: 
	I1204 23:57:54.689672   36539 main.go:141] libmachine: (ha-168758-m03) <domain type='kvm'>
	I1204 23:57:54.689678   36539 main.go:141] libmachine: (ha-168758-m03)   <name>ha-168758-m03</name>
	I1204 23:57:54.689682   36539 main.go:141] libmachine: (ha-168758-m03)   <memory unit='MiB'>2200</memory>
	I1204 23:57:54.689687   36539 main.go:141] libmachine: (ha-168758-m03)   <vcpu>2</vcpu>
	I1204 23:57:54.689691   36539 main.go:141] libmachine: (ha-168758-m03)   <features>
	I1204 23:57:54.689715   36539 main.go:141] libmachine: (ha-168758-m03)     <acpi/>
	I1204 23:57:54.689732   36539 main.go:141] libmachine: (ha-168758-m03)     <apic/>
	I1204 23:57:54.689738   36539 main.go:141] libmachine: (ha-168758-m03)     <pae/>
	I1204 23:57:54.689742   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.689747   36539 main.go:141] libmachine: (ha-168758-m03)   </features>
	I1204 23:57:54.689752   36539 main.go:141] libmachine: (ha-168758-m03)   <cpu mode='host-passthrough'>
	I1204 23:57:54.689758   36539 main.go:141] libmachine: (ha-168758-m03)   
	I1204 23:57:54.689763   36539 main.go:141] libmachine: (ha-168758-m03)   </cpu>
	I1204 23:57:54.689771   36539 main.go:141] libmachine: (ha-168758-m03)   <os>
	I1204 23:57:54.689775   36539 main.go:141] libmachine: (ha-168758-m03)     <type>hvm</type>
	I1204 23:57:54.689783   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='cdrom'/>
	I1204 23:57:54.689786   36539 main.go:141] libmachine: (ha-168758-m03)     <boot dev='hd'/>
	I1204 23:57:54.689799   36539 main.go:141] libmachine: (ha-168758-m03)     <bootmenu enable='no'/>
	I1204 23:57:54.689805   36539 main.go:141] libmachine: (ha-168758-m03)   </os>
	I1204 23:57:54.689812   36539 main.go:141] libmachine: (ha-168758-m03)   <devices>
	I1204 23:57:54.689817   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='cdrom'>
	I1204 23:57:54.689828   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/boot2docker.iso'/>
	I1204 23:57:54.689833   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hdc' bus='scsi'/>
	I1204 23:57:54.689840   36539 main.go:141] libmachine: (ha-168758-m03)       <readonly/>
	I1204 23:57:54.689844   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689852   36539 main.go:141] libmachine: (ha-168758-m03)     <disk type='file' device='disk'>
	I1204 23:57:54.689857   36539 main.go:141] libmachine: (ha-168758-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1204 23:57:54.689867   36539 main.go:141] libmachine: (ha-168758-m03)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/ha-168758-m03.rawdisk'/>
	I1204 23:57:54.689875   36539 main.go:141] libmachine: (ha-168758-m03)       <target dev='hda' bus='virtio'/>
	I1204 23:57:54.689882   36539 main.go:141] libmachine: (ha-168758-m03)     </disk>
	I1204 23:57:54.689886   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689891   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='mk-ha-168758'/>
	I1204 23:57:54.689898   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689903   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689910   36539 main.go:141] libmachine: (ha-168758-m03)     <interface type='network'>
	I1204 23:57:54.689917   36539 main.go:141] libmachine: (ha-168758-m03)       <source network='default'/>
	I1204 23:57:54.689924   36539 main.go:141] libmachine: (ha-168758-m03)       <model type='virtio'/>
	I1204 23:57:54.689928   36539 main.go:141] libmachine: (ha-168758-m03)     </interface>
	I1204 23:57:54.689935   36539 main.go:141] libmachine: (ha-168758-m03)     <serial type='pty'>
	I1204 23:57:54.689957   36539 main.go:141] libmachine: (ha-168758-m03)       <target port='0'/>
	I1204 23:57:54.689973   36539 main.go:141] libmachine: (ha-168758-m03)     </serial>
	I1204 23:57:54.689983   36539 main.go:141] libmachine: (ha-168758-m03)     <console type='pty'>
	I1204 23:57:54.689987   36539 main.go:141] libmachine: (ha-168758-m03)       <target type='serial' port='0'/>
	I1204 23:57:54.689993   36539 main.go:141] libmachine: (ha-168758-m03)     </console>
	I1204 23:57:54.689997   36539 main.go:141] libmachine: (ha-168758-m03)     <rng model='virtio'>
	I1204 23:57:54.690005   36539 main.go:141] libmachine: (ha-168758-m03)       <backend model='random'>/dev/random</backend>
	I1204 23:57:54.690009   36539 main.go:141] libmachine: (ha-168758-m03)     </rng>
	I1204 23:57:54.690013   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690016   36539 main.go:141] libmachine: (ha-168758-m03)     
	I1204 23:57:54.690021   36539 main.go:141] libmachine: (ha-168758-m03)   </devices>
	I1204 23:57:54.690026   36539 main.go:141] libmachine: (ha-168758-m03) </domain>
	I1204 23:57:54.690035   36539 main.go:141] libmachine: (ha-168758-m03) 
	I1204 23:57:54.696711   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:45:00:1a in network default
	I1204 23:57:54.697413   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:54.697439   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring networks are active...
	I1204 23:57:54.698233   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network default is active
	I1204 23:57:54.698580   36539 main.go:141] libmachine: (ha-168758-m03) Ensuring network mk-ha-168758 is active
	I1204 23:57:54.698931   36539 main.go:141] libmachine: (ha-168758-m03) Getting domain xml...
	I1204 23:57:54.699649   36539 main.go:141] libmachine: (ha-168758-m03) Creating domain...
	I1204 23:57:55.920334   36539 main.go:141] libmachine: (ha-168758-m03) Waiting to get IP...
	I1204 23:57:55.921079   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:55.921466   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:55.921501   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:55.921456   37304 retry.go:31] will retry after 209.695163ms: waiting for machine to come up
	I1204 23:57:56.132761   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.133191   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.133212   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.133145   37304 retry.go:31] will retry after 335.842051ms: waiting for machine to come up
	I1204 23:57:56.470639   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.471086   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.471135   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.471032   37304 retry.go:31] will retry after 296.078556ms: waiting for machine to come up
	I1204 23:57:56.768275   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:56.768757   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:56.768777   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:56.768709   37304 retry.go:31] will retry after 507.499948ms: waiting for machine to come up
	I1204 23:57:57.277223   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.277662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.277692   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.277613   37304 retry.go:31] will retry after 706.137269ms: waiting for machine to come up
	I1204 23:57:57.985335   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:57.985740   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:57.985763   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:57.985694   37304 retry.go:31] will retry after 901.171563ms: waiting for machine to come up
	I1204 23:57:58.888734   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:58.889195   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:58.889217   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:58.889148   37304 retry.go:31] will retry after 1.040272258s: waiting for machine to come up
	I1204 23:57:59.931201   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:57:59.931634   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:57:59.931662   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:57:59.931580   37304 retry.go:31] will retry after 1.47549599s: waiting for machine to come up
	I1204 23:58:01.408702   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:01.409147   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:01.409198   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:01.409117   37304 retry.go:31] will retry after 1.529862151s: waiting for machine to come up
	I1204 23:58:02.941039   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:02.941538   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:02.941562   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:02.941502   37304 retry.go:31] will retry after 1.818025113s: waiting for machine to come up
	I1204 23:58:04.760910   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:04.761392   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:04.761424   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:04.761351   37304 retry.go:31] will retry after 1.812557198s: waiting for machine to come up
	I1204 23:58:06.576024   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:06.576523   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:06.576553   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:06.576452   37304 retry.go:31] will retry after 3.417018451s: waiting for machine to come up
	I1204 23:58:09.994797   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:09.995228   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:09.995255   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:09.995175   37304 retry.go:31] will retry after 3.708395185s: waiting for machine to come up
	I1204 23:58:13.707953   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:13.708378   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find current IP address of domain ha-168758-m03 in network mk-ha-168758
	I1204 23:58:13.708407   36539 main.go:141] libmachine: (ha-168758-m03) DBG | I1204 23:58:13.708331   37304 retry.go:31] will retry after 5.536922009s: waiting for machine to come up
	I1204 23:58:19.251010   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251548   36539 main.go:141] libmachine: (ha-168758-m03) Found IP for machine: 192.168.39.211
	I1204 23:58:19.251581   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.251590   36539 main.go:141] libmachine: (ha-168758-m03) Reserving static IP address...
	I1204 23:58:19.252021   36539 main.go:141] libmachine: (ha-168758-m03) DBG | unable to find host DHCP lease matching {name: "ha-168758-m03", mac: "52:54:00:25:7a:4c", ip: "192.168.39.211"} in network mk-ha-168758
	I1204 23:58:19.326562   36539 main.go:141] libmachine: (ha-168758-m03) Reserved static IP address: 192.168.39.211
	I1204 23:58:19.326591   36539 main.go:141] libmachine: (ha-168758-m03) Waiting for SSH to be available...
	I1204 23:58:19.326600   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Getting to WaitForSSH function...
	I1204 23:58:19.329626   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330027   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.330055   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.330234   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH client type: external
	I1204 23:58:19.330259   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa (-rw-------)
	I1204 23:58:19.330287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1204 23:58:19.330300   36539 main.go:141] libmachine: (ha-168758-m03) DBG | About to run SSH command:
	I1204 23:58:19.330316   36539 main.go:141] libmachine: (ha-168758-m03) DBG | exit 0
	I1204 23:58:19.458033   36539 main.go:141] libmachine: (ha-168758-m03) DBG | SSH cmd err, output: <nil>: 
	I1204 23:58:19.458296   36539 main.go:141] libmachine: (ha-168758-m03) KVM machine creation complete!
	I1204 23:58:19.458639   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:19.459211   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459418   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:19.459604   36539 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1204 23:58:19.459621   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetState
	I1204 23:58:19.460815   36539 main.go:141] libmachine: Detecting operating system of created instance...
	I1204 23:58:19.460827   36539 main.go:141] libmachine: Waiting for SSH to be available...
	I1204 23:58:19.460832   36539 main.go:141] libmachine: Getting to WaitForSSH function...
	I1204 23:58:19.460837   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.463322   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463682   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.463709   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.463819   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.463998   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464138   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.464299   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.464466   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.464650   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.464661   36539 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1204 23:58:19.569758   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.569785   36539 main.go:141] libmachine: Detecting the provisioner...
	I1204 23:58:19.569794   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.573011   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573416   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.573434   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.573531   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.573755   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.573966   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.574140   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.574339   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.574549   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.574565   36539 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1204 23:58:19.678634   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1204 23:58:19.678713   36539 main.go:141] libmachine: found compatible host: buildroot
	I1204 23:58:19.678731   36539 main.go:141] libmachine: Provisioning with buildroot...
	I1204 23:58:19.678746   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679001   36539 buildroot.go:166] provisioning hostname "ha-168758-m03"
	I1204 23:58:19.679025   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.679187   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.681791   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682166   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.682203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.682354   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.682571   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682718   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.682893   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.683080   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.683252   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.683265   36539 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758-m03 && echo "ha-168758-m03" | sudo tee /etc/hostname
	I1204 23:58:19.805397   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758-m03
	
	I1204 23:58:19.805429   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.808431   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.808856   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.808878   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.809012   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:19.809177   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809288   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:19.809401   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:19.809650   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:19.809863   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:19.809887   36539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:58:19.923097   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:58:19.923127   36539 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1204 23:58:19.923146   36539 buildroot.go:174] setting up certificates
	I1204 23:58:19.923155   36539 provision.go:84] configureAuth start
	I1204 23:58:19.923166   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetMachineName
	I1204 23:58:19.923476   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:19.926158   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926547   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.926572   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.926691   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:19.928951   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929352   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:19.929383   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:19.929531   36539 provision.go:143] copyHostCerts
	I1204 23:58:19.929556   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929583   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1204 23:58:19.929592   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1204 23:58:19.929653   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1204 23:58:19.929721   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929747   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1204 23:58:19.929754   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1204 23:58:19.929777   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1204 23:58:19.929820   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929837   36539 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1204 23:58:19.929844   36539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1204 23:58:19.929863   36539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1204 23:58:19.929908   36539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758-m03 san=[127.0.0.1 192.168.39.211 ha-168758-m03 localhost minikube]
	I1204 23:58:20.121053   36539 provision.go:177] copyRemoteCerts
	I1204 23:58:20.121105   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:58:20.121126   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.123934   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124493   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.124528   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.124773   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.124964   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.125130   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.125289   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.208033   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 23:58:20.208112   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:58:20.233800   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 23:58:20.233885   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:58:20.258548   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 23:58:20.258616   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 23:58:20.282884   36539 provision.go:87] duration metric: took 359.716273ms to configureAuth
	I1204 23:58:20.282915   36539 buildroot.go:189] setting minikube options for container-runtime
	I1204 23:58:20.283121   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:20.283188   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.285836   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286252   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.286287   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.286487   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.286665   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.286810   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.287008   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.287190   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.287357   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.287370   36539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:58:20.518772   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:58:20.518795   36539 main.go:141] libmachine: Checking connection to Docker...
	I1204 23:58:20.518802   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetURL
	I1204 23:58:20.520187   36539 main.go:141] libmachine: (ha-168758-m03) DBG | Using libvirt version 6000000
	I1204 23:58:20.522653   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523105   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.523133   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.523291   36539 main.go:141] libmachine: Docker is up and running!
	I1204 23:58:20.523307   36539 main.go:141] libmachine: Reticulating splines...
	I1204 23:58:20.523315   36539 client.go:171] duration metric: took 26.125066489s to LocalClient.Create
	I1204 23:58:20.523341   36539 start.go:167] duration metric: took 26.125137032s to libmachine.API.Create "ha-168758"
	I1204 23:58:20.523357   36539 start.go:293] postStartSetup for "ha-168758-m03" (driver="kvm2")
	I1204 23:58:20.523373   36539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:58:20.523396   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.523640   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:58:20.523662   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.525716   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526081   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.526107   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.526258   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.526431   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.526593   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.526718   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.611923   36539 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:58:20.616136   36539 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 23:58:20.616164   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1204 23:58:20.616222   36539 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1204 23:58:20.616301   36539 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1204 23:58:20.616310   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1204 23:58:20.616393   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:58:20.625581   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:20.649393   36539 start.go:296] duration metric: took 126.018678ms for postStartSetup
	I1204 23:58:20.649447   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetConfigRaw
	I1204 23:58:20.650084   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.652794   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653173   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.653203   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.653515   36539 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1204 23:58:20.653706   36539 start.go:128] duration metric: took 26.274055056s to createHost
	I1204 23:58:20.653726   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.657371   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.657844   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.657871   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.658096   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.658293   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658453   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.658591   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.658709   36539 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:20.658880   36539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1204 23:58:20.658893   36539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 23:58:20.762401   36539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733356700.741731450
	
	I1204 23:58:20.762429   36539 fix.go:216] guest clock: 1733356700.741731450
	I1204 23:58:20.762440   36539 fix.go:229] Guest: 2024-12-04 23:58:20.74173145 +0000 UTC Remote: 2024-12-04 23:58:20.653717113 +0000 UTC m=+145.998402889 (delta=88.014337ms)
	I1204 23:58:20.762459   36539 fix.go:200] guest clock delta is within tolerance: 88.014337ms
	I1204 23:58:20.762465   36539 start.go:83] releasing machines lock for "ha-168758-m03", held for 26.382931771s
	I1204 23:58:20.762482   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.762743   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:20.765725   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.766178   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.766206   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.769126   36539 out.go:177] * Found network options:
	I1204 23:58:20.770828   36539 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.70
	W1204 23:58:20.772463   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.772484   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.772496   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773077   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773256   36539 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1204 23:58:20.773351   36539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:58:20.773389   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	W1204 23:58:20.773530   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 23:58:20.773562   36539 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 23:58:20.773617   36539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:58:20.773705   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1204 23:58:20.776558   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776588   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.776979   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777009   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777099   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:20.777132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:20.777175   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777373   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777397   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1204 23:58:20.777602   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1204 23:58:20.777604   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777735   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1204 23:58:20.777749   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:20.777829   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1204 23:58:21.014947   36539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 23:58:21.021071   36539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 23:58:21.021137   36539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:58:21.039078   36539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 23:58:21.039110   36539 start.go:495] detecting cgroup driver to use...
	I1204 23:58:21.039177   36539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:58:21.058712   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:58:21.073746   36539 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:58:21.073808   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:58:21.087957   36539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:58:21.101618   36539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:58:21.221628   36539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:58:21.378405   36539 docker.go:233] disabling docker service ...
	I1204 23:58:21.378477   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:58:21.394967   36539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:58:21.408722   36539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:58:21.562596   36539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:58:21.684473   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:58:21.698325   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:58:21.716302   36539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:58:21.716372   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.726130   36539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:58:21.726192   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.737846   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.748038   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.758637   36539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:58:21.769448   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.779664   36539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.796293   36539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:58:21.806603   36539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:58:21.816322   36539 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 23:58:21.816373   36539 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 23:58:21.829466   36539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:58:21.839041   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:21.954707   36539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:58:22.043532   36539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:58:22.043590   36539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:58:22.048235   36539 start.go:563] Will wait 60s for crictl version
	I1204 23:58:22.048287   36539 ssh_runner.go:195] Run: which crictl
	I1204 23:58:22.051918   36539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:58:22.091888   36539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1204 23:58:22.091959   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.119246   36539 ssh_runner.go:195] Run: crio --version
	I1204 23:58:22.147844   36539 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1204 23:58:22.149672   36539 out.go:177]   - env NO_PROXY=192.168.39.246
	I1204 23:58:22.151366   36539 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.70
	I1204 23:58:22.152919   36539 main.go:141] libmachine: (ha-168758-m03) Calling .GetIP
	I1204 23:58:22.155788   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156132   36539 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1204 23:58:22.156156   36539 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1204 23:58:22.156371   36539 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1204 23:58:22.161001   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:22.172781   36539 mustload.go:65] Loading cluster: ha-168758
	I1204 23:58:22.173021   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:22.173277   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.173312   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.188201   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I1204 23:58:22.188638   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.189068   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.189086   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.189487   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.189677   36539 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1204 23:58:22.191201   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:22.191497   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:22.191540   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:22.206205   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I1204 23:58:22.206662   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:22.207121   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:22.207141   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:22.207423   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:22.207583   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:22.207723   36539 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.211
	I1204 23:58:22.207733   36539 certs.go:194] generating shared ca certs ...
	I1204 23:58:22.207746   36539 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.207850   36539 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1204 23:58:22.207885   36539 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1204 23:58:22.207894   36539 certs.go:256] generating profile certs ...
	I1204 23:58:22.207958   36539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1204 23:58:22.207980   36539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0
	I1204 23:58:22.207995   36539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1204 23:58:22.288746   36539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 ...
	I1204 23:58:22.288772   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0: {Name:mk226e78394d9a1d1481fe02645dfa0bdae47c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.288924   36539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 ...
	I1204 23:58:22.288937   36539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0: {Name:mk254430c45779d3847a67c61786cc3c2c6ae80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:22.289003   36539 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1204 23:58:22.289131   36539 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.6fb954c0 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1204 23:58:22.289249   36539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1204 23:58:22.289263   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 23:58:22.289275   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 23:58:22.289288   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 23:58:22.289301   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 23:58:22.289314   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 23:58:22.289328   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 23:58:22.289344   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 23:58:22.306093   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 23:58:22.306200   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1204 23:58:22.306239   36539 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1204 23:58:22.306253   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 23:58:22.306298   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:58:22.306334   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:58:22.306370   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1204 23:58:22.306426   36539 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1204 23:58:22.306471   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.306493   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.306511   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.306552   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:22.309680   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310101   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:22.310130   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:22.310297   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:22.310547   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:22.310740   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:22.310907   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:22.386340   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 23:58:22.392285   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 23:58:22.405695   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 23:58:22.409772   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1204 23:58:22.423743   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 23:58:22.427677   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 23:58:22.437721   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 23:58:22.441623   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1204 23:58:22.452543   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 23:58:22.461102   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 23:58:22.471271   36539 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 23:58:22.477234   36539 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1204 23:58:22.491216   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:58:22.516244   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:58:22.540200   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:58:22.565503   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:58:22.589637   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1204 23:58:22.614321   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 23:58:22.637487   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:58:22.661061   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 23:58:22.683528   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1204 23:58:22.706445   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1204 23:58:22.729085   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:58:22.750946   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 23:58:22.766659   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1204 23:58:22.782041   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 23:58:22.797608   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1204 23:58:22.812913   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 23:58:22.828464   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1204 23:58:22.843593   36539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 23:58:22.859636   36539 ssh_runner.go:195] Run: openssl version
	I1204 23:58:22.865420   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:58:22.876063   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880415   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.880469   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:22.885958   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:58:22.896219   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1204 23:58:22.906770   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910894   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.910951   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1204 23:58:22.916407   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1204 23:58:22.926735   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1204 23:58:22.937057   36539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941137   36539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.941190   36539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1204 23:58:22.946692   36539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
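Each CA is exposed to the system trust store twice: once under its own name in /etc/ssl/certs and once as a symlink named after its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL looks certificates up. Reproducing one of those names by hand looks like this (sketch, using the minikubeCA.pem path from the log):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # hash-named link back to the PEM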
	I1204 23:58:22.959164   36539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:58:22.963452   36539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:58:22.963502   36539 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1204 23:58:22.963582   36539 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:58:22.963624   36539 kube-vip.go:115] generating kube-vip config ...
	I1204 23:58:22.963666   36539 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 23:58:22.979996   36539 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 23:58:22.980050   36539 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
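The generated manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines later, so kubelet runs kube-vip as a static pod on each control-plane node; whichever instance wins the plndr-cp-lock lease holds the 192.168.39.254 address that the profile records as APIServerHAVIP. Once the VIP is up, the apiserver should answer on it, which can be spot-checked with (sketch; -k skips verification against the minikube CA):
	curl -k https://192.168.39.254:8443/version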
	I1204 23:58:22.980122   36539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.989754   36539 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1204 23:58:22.989828   36539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1204 23:58:22.999376   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1204 23:58:22.999414   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1204 23:58:22.999429   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999442   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:58:22.999460   36539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1204 23:58:22.999481   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:22.999484   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1204 23:58:22.999564   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1204 23:58:23.016751   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1204 23:58:23.016792   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1204 23:58:23.016813   36539 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.016873   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1204 23:58:23.016893   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1204 23:58:23.016908   36539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1204 23:58:23.044749   36539 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1204 23:58:23.044784   36539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
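The three stat checks fail because this is a fresh node, so kubelet, kubeadm and kubectl are copied over from the runner's cache; the checksum-qualified dl.k8s.io URLs logged above are the upstream source for that cache. Fetching and verifying one of the binaries by hand against its published .sha256 looks roughly like this (sketch):
	curl -fLO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check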
	I1204 23:58:23.845384   36539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 23:58:23.856608   36539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1204 23:58:23.873561   36539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:58:23.888925   36539 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1204 23:58:23.904283   36539 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1204 23:58:23.908166   36539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
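This is the same strip-and-append pattern used for host.minikube.internal earlier: any existing line for the name is filtered out of /etc/hosts and the fresh mapping is appended, so the node's /etc/hosts ends up containing entries along the lines of:
	192.168.39.1	host.minikube.internal
	192.168.39.254	control-plane.minikube.internal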
	I1204 23:58:23.921626   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:24.054957   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:24.071310   36539 host.go:66] Checking if "ha-168758" exists ...
	I1204 23:58:24.071774   36539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:58:24.071826   36539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:58:24.087392   36539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I1204 23:58:24.087908   36539 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:58:24.088426   36539 main.go:141] libmachine: Using API Version  1
	I1204 23:58:24.088445   36539 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:58:24.088872   36539 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:58:24.089072   36539 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1204 23:58:24.089234   36539 start.go:317] joinCluster: &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:24.089406   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1204 23:58:24.089426   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1204 23:58:24.092795   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093319   36539 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1204 23:58:24.093346   36539 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1204 23:58:24.093418   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1204 23:58:24.093603   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1204 23:58:24.093787   36539 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1204 23:58:24.093916   36539 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1204 23:58:24.257067   36539 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:24.257133   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1204 23:58:46.569250   36539 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8wphde.b7r2hfejhmn4frsm --discovery-token-ca-cert-hash sha256:6fce3e78f8ddb8bf962d2174e3df7e92cedc0d535dbc90cdc910ca26d6d87f8c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-168758-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.312080737s)
	I1204 23:58:46.569302   36539 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1204 23:58:47.113692   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-168758-m03 minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=ha-168758 minikube.k8s.io/primary=false
	I1204 23:58:47.254473   36539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-168758-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1204 23:58:47.374583   36539 start.go:319] duration metric: took 23.285344776s to joinCluster
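With the join complete, the new member is labeled with the minikube.k8s.io/* metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed so regular workloads can land on it. The result can be inspected with (sketch):
	kubectl get node ha-168758-m03 --show-labels
	kubectl describe node ha-168758-m03 | grep -i taints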
	I1204 23:58:47.374658   36539 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:58:47.375005   36539 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:58:47.375915   36539 out.go:177] * Verifying Kubernetes components...
	I1204 23:58:47.376822   36539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:47.645069   36539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:47.700465   36539 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:58:47.700774   36539 kapi.go:59] client config for ha-168758: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 23:58:47.700882   36539 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I1204 23:58:47.701229   36539 node_ready.go:35] waiting up to 6m0s for node "ha-168758-m03" to be "Ready" ...
	I1204 23:58:47.701326   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:47.701346   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:47.701361   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:47.701372   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:47.704946   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.202242   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.202269   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.202277   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.202281   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.205645   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:48.701516   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:48.701536   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:48.701543   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:48.701547   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:48.704919   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.202192   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.202215   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.202225   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.202231   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.205567   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.701894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:49.701912   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:49.701920   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:49.701928   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:49.705415   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:49.706113   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:50.201921   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.201950   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.201958   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.201963   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.205192   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:50.701850   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:50.701878   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:50.701890   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:50.701897   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:50.705478   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.201426   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.201446   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.201457   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.201463   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.204423   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:51.702262   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:51.702285   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:51.702293   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:51.702297   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:51.706096   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:51.706649   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:52.201560   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.201578   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.201585   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.201590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.204456   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:52.701709   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:52.701729   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:52.701737   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:52.701742   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:52.705302   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:53.201587   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.201627   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.201637   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.201642   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.206081   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:58:53.702110   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:53.702136   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:53.702146   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:53.702151   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:53.707592   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:58:53.708884   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:54.202266   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.202287   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.202299   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.202303   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.205388   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:54.702184   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:54.702205   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:54.702212   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:54.702217   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:54.705854   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.202090   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.202109   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.202117   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.202122   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.205278   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:55.702366   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:55.702386   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:55.702393   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:55.702398   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:55.705911   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.202018   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.202039   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.202048   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.202052   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.205205   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:56.205770   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:56.702046   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:56.702068   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:56.702077   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:56.702079   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:56.705883   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.201534   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.201553   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.201560   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.201565   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.204718   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:57.702166   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:57.702192   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:57.702207   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:57.702215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:57.706100   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.202178   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.202197   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.202207   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.202213   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.205266   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:58.205989   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:58:58.702432   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:58.702453   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:58.702461   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:58.702465   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:58.706164   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:58:59.201677   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.201698   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.201706   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.204712   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:58:59.702080   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:58:59.702106   36539 round_trippers.go:469] Request Headers:
	I1204 23:58:59.702114   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:58:59.702118   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:58:59.705833   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.202377   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.202398   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.202405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.202410   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.204828   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:00.701624   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:00.701644   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:00.701652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:00.701655   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:00.704805   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:00.705308   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:01.201668   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.201694   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.201704   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.201710   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.205149   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:01.702232   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:01.702261   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:01.702269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:01.702275   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:01.706624   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:02.201616   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.201640   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.201652   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.201658   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.213640   36539 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1204 23:59:02.701482   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:02.701500   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:02.701507   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:02.701511   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:02.707401   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:02.707932   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:03.202348   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.202370   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.202382   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.202386   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.206086   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:03.702128   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:03.702153   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:03.702167   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:03.702176   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:03.705764   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.202048   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.202071   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.202079   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.202083   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.205585   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:04.702350   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:04.702371   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:04.702379   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:04.702383   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:04.706391   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.201695   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.201717   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.201724   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.201728   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.206270   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:05.207294   36539 node_ready.go:53] node "ha-168758-m03" has status "Ready":"False"
	I1204 23:59:05.701694   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:05.701716   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.701727   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.701733   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.705389   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.706154   36539 node_ready.go:49] node "ha-168758-m03" has status "Ready":"True"
	I1204 23:59:05.706172   36539 node_ready.go:38] duration metric: took 18.004918776s for node "ha-168758-m03" to be "Ready" ...
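The loop above simply re-reads /api/v1/nodes/ha-168758-m03 roughly every 500ms until the Ready condition reports True, which here takes about 18s after the join. The same condition can be read directly with (sketch):
	kubectl get node ha-168758-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'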
	I1204 23:59:05.706181   36539 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:05.706250   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:05.706260   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.706266   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.706270   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.713324   36539 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 23:59:05.720724   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.720806   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rdbhr
	I1204 23:59:05.720815   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.720829   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.720837   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.724822   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.725428   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.725443   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.725451   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.725455   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.728496   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.729387   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.729410   36539 pod_ready.go:82] duration metric: took 8.658373ms for pod "coredns-7c65d6cfc9-rdbhr" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729423   36539 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.729491   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgfvm
	I1204 23:59:05.729504   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.729516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.729523   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.732217   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.732771   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.732784   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.732791   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.732795   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.735517   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.736112   36539 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.736127   36539 pod_ready.go:82] duration metric: took 6.697955ms for pod "coredns-7c65d6cfc9-zgfvm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736135   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.736189   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758
	I1204 23:59:05.736199   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.736206   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.736209   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.739373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:05.740248   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:05.740263   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.740269   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.740273   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.742492   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.742913   36539 pod_ready.go:93] pod "etcd-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.742932   36539 pod_ready.go:82] duration metric: took 6.791875ms for pod "etcd-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742940   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.742986   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m02
	I1204 23:59:05.742994   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.743000   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.743004   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.745506   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.746152   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:05.746167   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.746173   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.746177   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.748564   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:05.749064   36539 pod_ready.go:93] pod "etcd-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:05.749082   36539 pod_ready.go:82] duration metric: took 6.135218ms for pod "etcd-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.749094   36539 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:05.902481   36539 request.go:632] Waited for 153.318139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902555   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-168758-m03
	I1204 23:59:05.902560   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:05.902568   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:05.902576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:05.906398   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.102389   36539 request.go:632] Waited for 195.399546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102443   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:06.102448   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.102476   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.102487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.105971   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.106400   36539 pod_ready.go:93] pod "etcd-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.106416   36539 pod_ready.go:82] duration metric: took 357.316023ms for pod "etcd-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.106432   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.302535   36539 request.go:632] Waited for 196.013383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302597   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758
	I1204 23:59:06.302608   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.302620   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.302626   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.305529   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:06.502522   36539 request.go:632] Waited for 196.356899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502576   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:06.502581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.502590   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.502598   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.507875   36539 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 23:59:06.508456   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.508479   36539 pod_ready.go:82] duration metric: took 402.037453ms for pod "kube-apiserver-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.508493   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.702449   36539 request.go:632] Waited for 193.867804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702503   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m02
	I1204 23:59:06.702509   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.702516   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.702521   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.706330   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.902289   36539 request.go:632] Waited for 195.373137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902347   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:06.902355   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:06.902371   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:06.902380   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:06.905544   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:06.906052   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:06.906069   36539 pod_ready.go:82] duration metric: took 397.563799ms for pod "kube-apiserver-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:06.906080   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.102210   36539 request.go:632] Waited for 196.065026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102278   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-168758-m03
	I1204 23:59:07.102285   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.102298   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.102311   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.105396   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.302017   36539 request.go:632] Waited for 195.70193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302103   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:07.302115   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.302147   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.302157   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.307086   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:07.307589   36539 pod_ready.go:93] pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.307617   36539 pod_ready.go:82] duration metric: took 401.528153ms for pod "kube-apiserver-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.307631   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.502581   36539 request.go:632] Waited for 194.865138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502634   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758
	I1204 23:59:07.502639   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.502646   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.502652   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.506125   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.702202   36539 request.go:632] Waited for 195.372404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702281   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:07.702288   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.702295   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.702302   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.705978   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:07.706658   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:07.706678   36539 pod_ready.go:82] duration metric: took 399.03822ms for pod "kube-controller-manager-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.706694   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:07.901924   36539 request.go:632] Waited for 195.154365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902019   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m02
	I1204 23:59:07.902029   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:07.902036   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:07.902039   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:07.905257   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.102494   36539 request.go:632] Waited for 196.34114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:08.102566   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.102574   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.102581   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.106643   36539 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 23:59:08.107401   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.107420   36539 pod_ready.go:82] duration metric: took 400.717315ms for pod "kube-controller-manager-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.107434   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.302582   36539 request.go:632] Waited for 195.013676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302656   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-168758-m03
	I1204 23:59:08.302666   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.302679   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.302688   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.308848   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:08.501837   36539 request.go:632] Waited for 192.289915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501891   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.501918   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.501932   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.501962   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.504980   36539 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 23:59:08.505549   36539 pod_ready.go:93] pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.505567   36539 pod_ready.go:82] duration metric: took 398.125286ms for pod "kube-controller-manager-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.505577   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.702722   36539 request.go:632] Waited for 197.087803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702788   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hxpd
	I1204 23:59:08.702797   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.702805   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.702809   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.706146   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.902503   36539 request.go:632] Waited for 195.802416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902574   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:08.902581   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:08.902588   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:08.902593   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:08.905815   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:08.906362   36539 pod_ready.go:93] pod "kube-proxy-9hxpd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:08.906385   36539 pod_ready.go:82] duration metric: took 400.800234ms for pod "kube-proxy-9hxpd" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:08.906398   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.101673   36539 request.go:632] Waited for 195.212044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101755   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s57v4
	I1204 23:59:09.101767   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.101780   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.101787   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.105084   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.302204   36539 request.go:632] Waited for 196.371294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302277   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:09.302283   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.302290   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.302295   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.305317   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.306001   36539 pod_ready.go:93] pod "kube-proxy-s57v4" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.306020   36539 pod_ready.go:82] duration metric: took 399.614816ms for pod "kube-proxy-s57v4" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.306032   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.502424   36539 request.go:632] Waited for 196.313347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502475   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjhjm
	I1204 23:59:09.502483   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.502494   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.502501   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.505899   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.702597   36539 request.go:632] Waited for 196.046362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702661   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:09.702667   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.702675   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.702679   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.706005   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:09.706501   36539 pod_ready.go:93] pod "kube-proxy-vjhjm" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:09.706518   36539 pod_ready.go:82] duration metric: took 400.479582ms for pod "kube-proxy-vjhjm" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.706528   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:09.902149   36539 request.go:632] Waited for 195.548343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902203   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758
	I1204 23:59:09.902208   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:09.902215   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:09.902218   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:09.905375   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.102485   36539 request.go:632] Waited for 196.29891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102549   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758
	I1204 23:59:10.102556   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.102567   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.102576   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.105620   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.106279   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.106297   36539 pod_ready.go:82] duration metric: took 399.763414ms for pod "kube-scheduler-ha-168758" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.106307   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.302480   36539 request.go:632] Waited for 196.111079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302561   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m02
	I1204 23:59:10.302570   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.302581   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.302590   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.306007   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.501852   36539 request.go:632] Waited for 195.277553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501917   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m02
	I1204 23:59:10.501924   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.501957   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.501978   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.505309   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.505892   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.505913   36539 pod_ready.go:82] duration metric: took 399.598555ms for pod "kube-scheduler-ha-168758-m02" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.505927   36539 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.702021   36539 request.go:632] Waited for 195.988022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702085   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-168758-m03
	I1204 23:59:10.702093   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.702104   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.702111   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.705373   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.902383   36539 request.go:632] Waited for 196.36776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902455   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-168758-m03
	I1204 23:59:10.902463   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.902473   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.902487   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.905908   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:10.906376   36539 pod_ready.go:93] pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:10.906394   36539 pod_ready.go:82] duration metric: took 400.458551ms for pod "kube-scheduler-ha-168758-m03" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:10.906408   36539 pod_ready.go:39] duration metric: took 5.200215763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:10.906427   36539 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:59:10.906489   36539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:59:10.925438   36539 api_server.go:72] duration metric: took 23.550742485s to wait for apiserver process to appear ...
	I1204 23:59:10.925468   36539 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:59:10.925490   36539 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1204 23:59:10.929840   36539 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1204 23:59:10.929894   36539 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1204 23:59:10.929901   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:10.929909   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:10.929916   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:10.930671   36539 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 23:59:10.930740   36539 api_server.go:141] control plane version: v1.31.2
	I1204 23:59:10.930758   36539 api_server.go:131] duration metric: took 5.282883ms to wait for apiserver health ...
	I1204 23:59:10.930771   36539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:59:11.102180   36539 request.go:632] Waited for 171.329772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102245   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.102253   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.102264   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.102275   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.108643   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.116294   36539 system_pods.go:59] 24 kube-system pods found
	I1204 23:59:11.116322   36539 system_pods.go:61] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.116329   36539 system_pods.go:61] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.116335   36539 system_pods.go:61] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.116341   36539 system_pods.go:61] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.116346   36539 system_pods.go:61] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.116350   36539 system_pods.go:61] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.116356   36539 system_pods.go:61] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.116361   36539 system_pods.go:61] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.116367   36539 system_pods.go:61] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.116375   36539 system_pods.go:61] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.116381   36539 system_pods.go:61] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.116388   36539 system_pods.go:61] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.116399   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.116406   36539 system_pods.go:61] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.116415   36539 system_pods.go:61] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.116421   36539 system_pods.go:61] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.116429   36539 system_pods.go:61] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.116435   36539 system_pods.go:61] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.116444   36539 system_pods.go:61] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.116450   36539 system_pods.go:61] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.116458   36539 system_pods.go:61] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.116464   36539 system_pods.go:61] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.116472   36539 system_pods.go:61] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.116480   36539 system_pods.go:61] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.116490   36539 system_pods.go:74] duration metric: took 185.709136ms to wait for pod list to return data ...
	I1204 23:59:11.116504   36539 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:59:11.302342   36539 request.go:632] Waited for 185.765456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302393   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1204 23:59:11.302399   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.302405   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.302409   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.305795   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.305915   36539 default_sa.go:45] found service account: "default"
	I1204 23:59:11.305932   36539 default_sa.go:55] duration metric: took 189.416579ms for default service account to be created ...
	I1204 23:59:11.305962   36539 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:59:11.502392   36539 request.go:632] Waited for 196.360375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502441   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1204 23:59:11.502446   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.502454   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.502457   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.509349   36539 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 23:59:11.515542   36539 system_pods.go:86] 24 kube-system pods found
	I1204 23:59:11.515567   36539 system_pods.go:89] "coredns-7c65d6cfc9-rdbhr" [84a1588a-ae3a-4cf6-b483-eeee3e386743] Running
	I1204 23:59:11.515572   36539 system_pods.go:89] "coredns-7c65d6cfc9-zgfvm" [aa3c3a1a-7105-4d25-af5f-638f76d5dc3b] Running
	I1204 23:59:11.515576   36539 system_pods.go:89] "etcd-ha-168758" [89c0590b-f720-489f-92de-114290858327] Running
	I1204 23:59:11.515580   36539 system_pods.go:89] "etcd-ha-168758-m02" [b2ffb788-429a-4583-a4ee-44eaec1a80fe] Running
	I1204 23:59:11.515584   36539 system_pods.go:89] "etcd-ha-168758-m03" [9912285e-4d34-4848-9cd7-ebfe1e2fc6b6] Running
	I1204 23:59:11.515587   36539 system_pods.go:89] "kindnet-9l7jr" [b4b65009-de28-48a8-afc3-66d8a0aaf8a8] Running
	I1204 23:59:11.515592   36539 system_pods.go:89] "kindnet-f4s96" [4aeae109-93c9-4b84-9785-060d2661fd02] Running
	I1204 23:59:11.515595   36539 system_pods.go:89] "kindnet-qzjm2" [8cb0cd7b-1a4f-42f3-ad4b-2bb3f353b1ff] Running
	I1204 23:59:11.515599   36539 system_pods.go:89] "kube-apiserver-ha-168758" [225501e2-cdd2-4374-b777-6ecdb1458147] Running
	I1204 23:59:11.515603   36539 system_pods.go:89] "kube-apiserver-ha-168758-m02" [3b201803-8495-4c4a-9da8-e525a864160b] Running
	I1204 23:59:11.515607   36539 system_pods.go:89] "kube-apiserver-ha-168758-m03" [d38eb205-d9e9-422e-9f0a-92b5e27fcf06] Running
	I1204 23:59:11.515611   36539 system_pods.go:89] "kube-controller-manager-ha-168758" [618897f9-31e5-400f-b3a6-6e9ff7866534] Running
	I1204 23:59:11.515615   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m02" [2f3367f5-6e4d-4de9-a830-2aeac8b1bb66] Running
	I1204 23:59:11.515620   36539 system_pods.go:89] "kube-controller-manager-ha-168758-m03" [6701fec1-6fd3-4ec0-a269-c55b53c7f17e] Running
	I1204 23:59:11.515624   36539 system_pods.go:89] "kube-proxy-9hxpd" [61053eb8-4fdb-48be-84a1-bb21e54ecfa5] Running
	I1204 23:59:11.515627   36539 system_pods.go:89] "kube-proxy-s57v4" [f6bed7b0-ce04-4af2-840e-5da3f373ca6c] Running
	I1204 23:59:11.515630   36539 system_pods.go:89] "kube-proxy-vjhjm" [e00e9325-7046-41b0-ade6-8cac8362e399] Running
	I1204 23:59:11.515633   36539 system_pods.go:89] "kube-scheduler-ha-168758" [d7bf5018-a019-4b95-9904-f3d2980de5e5] Running
	I1204 23:59:11.515636   36539 system_pods.go:89] "kube-scheduler-ha-168758-m02" [fd73b78e-0f52-420e-aa28-e8843adb00ce] Running
	I1204 23:59:11.515639   36539 system_pods.go:89] "kube-scheduler-ha-168758-m03" [3e543bde-2331-4e7d-8301-d9d2379d3670] Running
	I1204 23:59:11.515642   36539 system_pods.go:89] "kube-vip-ha-168758" [5b18a29e-bb38-4ee7-b712-1d39846bf8ae] Running
	I1204 23:59:11.515645   36539 system_pods.go:89] "kube-vip-ha-168758-m02" [97ee69ff-e238-4227-84fc-d9573fc81dd0] Running
	I1204 23:59:11.515648   36539 system_pods.go:89] "kube-vip-ha-168758-m03" [2ca9e0a2-ef37-4420-b138-1093dd69e0d3] Running
	I1204 23:59:11.515654   36539 system_pods.go:89] "storage-provisioner" [8e5c35c5-c5c5-41e6-9452-a3e8e0525b03] Running
	I1204 23:59:11.515660   36539 system_pods.go:126] duration metric: took 209.689107ms to wait for k8s-apps to be running ...
	I1204 23:59:11.515669   36539 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:59:11.515708   36539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:59:11.532158   36539 system_svc.go:56] duration metric: took 16.479991ms WaitForService to wait for kubelet
	I1204 23:59:11.532192   36539 kubeadm.go:582] duration metric: took 24.157502148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:59:11.532215   36539 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:59:11.702618   36539 request.go:632] Waited for 170.317658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702682   36539 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1204 23:59:11.702690   36539 round_trippers.go:469] Request Headers:
	I1204 23:59:11.702701   36539 round_trippers.go:473]     Accept: application/json, */*
	I1204 23:59:11.702708   36539 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1204 23:59:11.706495   36539 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 23:59:11.707279   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707300   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707313   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707318   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707323   36539 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 23:59:11.707329   36539 node_conditions.go:123] node cpu capacity is 2
	I1204 23:59:11.707335   36539 node_conditions.go:105] duration metric: took 175.114412ms to run NodePressure ...
	I1204 23:59:11.707353   36539 start.go:241] waiting for startup goroutines ...
	I1204 23:59:11.707385   36539 start.go:255] writing updated cluster config ...
	I1204 23:59:11.707662   36539 ssh_runner.go:195] Run: rm -f paused
	I1204 23:59:11.758613   36539 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:59:11.760643   36539 out.go:177] * Done! kubectl is now configured to use "ha-168758" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.495397411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356985495372134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d81866e2-e662-42b6-8042-b094dd1380f8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.496530215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7e1b958-4a9a-46b4-ac91-2db2df347dda name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.496679617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7e1b958-4a9a-46b4-ac91-2db2df347dda name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.497256529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7e1b958-4a9a-46b4-ac91-2db2df347dda name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.533784516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50b1a483-ed3e-4443-bbf6-3c4989f029c1 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.533882687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50b1a483-ed3e-4443-bbf6-3c4989f029c1 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.535670592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66050993-b069-4095-8fab-e019e9861a3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.536230446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356985536206573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66050993-b069-4095-8fab-e019e9861a3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.536674261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff2e33d1-6ce5-4ba8-9f2f-cc96752cf1d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.536735289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff2e33d1-6ce5-4ba8-9f2f-cc96752cf1d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.537075479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff2e33d1-6ce5-4ba8-9f2f-cc96752cf1d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.574607131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5307739d-5ce5-47c6-aa64-f95d4f9f25a6 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.574696057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5307739d-5ce5-47c6-aa64-f95d4f9f25a6 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.575892991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c69760e9-61e8-40e0-a7e2-b04aff72c864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.576468710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356985576443895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c69760e9-61e8-40e0-a7e2-b04aff72c864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.577172109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f49f5074-54e8-4dff-88cd-95a92eeaf8b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.577235117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f49f5074-54e8-4dff-88cd-95a92eeaf8b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.577562216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f49f5074-54e8-4dff-88cd-95a92eeaf8b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.613447136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=823fcb2d-1e58-4515-a8bc-25dbe725bf81 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.613519689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=823fcb2d-1e58-4515-a8bc-25dbe725bf81 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.614397605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a11cc5a-beac-4b6f-9d07-6e10dd9a8300 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.614852317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356985614831513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a11cc5a-beac-4b6f-9d07-6e10dd9a8300 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.615433004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6287f18-5163-45c3-8f5e-6a8a126dd9c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.615503435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6287f18-5163-45c3-8f5e-6a8a126dd9c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:03:05 ha-168758 crio[658]: time="2024-12-05 00:03:05.615802534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db7ff0ab0780134387b2f431e1435c8d3535335883f819717e1a1d5a09be9b42,PodSandboxId:2a10916ad10461cf16a4f049400c4ce2b8244cd685c97d1b7fd42aa1093dbdaf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733356756267767746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qnw8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d9809d-958e-4df8-8b72-435c46d8030d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c,PodSandboxId:b695edf511c5e2a4fd031f1c9302adfcd8058dcd8248f0e35e6028e78287e3b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613713549764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgfvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa3c3a1a-7105-4d25-af5f-638f76d5dc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977,PodSandboxId:b16e3868f6d037a4b1b17ec5d811544f3b69e4c4725057eefb1de9ca052b0069,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733356613682080113,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rdbhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
84a1588a-ae3a-4cf6-b483-eeee3e386743,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862,PodSandboxId:bb748d0b54fa23681a6c5937621ccdd24d578694fd8e871521a856de7b39a80b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733356613605140482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5c35c5-c5c5-41e6-9452-a3e8e0525b03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302,PodSandboxId:f35f03db32634a0f1a34805be8a1c00e90ceb1327f2f7cf174862d5aa34b792d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733356601953567447,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9l7jr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b65009-de28-48a8-afc3-66d8a0aaf8a8,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2,PodSandboxId:29cc806b9c0660d3e963ebd1a850dc5024ad8940125250bf983c2ad3ceb951ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733356598
380059487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vjhjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e00e9325-7046-41b0-ade6-8cac8362e399,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd,PodSandboxId:399ac5ad9134840f2769d0528e58fcb3457a89d416ca099dbd06780afa8e4cdb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173335659096
7646418,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b79a47425cbd3637dfbec9a8cbddf4,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3,PodSandboxId:7fa6e474a7667742edafbd67958a1ee477c687072a4f7eac8f74f15a55c674b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733356587746556453,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5c96a4dd7f3f8d2d532f784e777953,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579,PodSandboxId:9e97775967d5d9fe027a7c795d079fb4af0d09cd6de19a2f65bb153ebd98fbdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733356587753118880,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f069d7eea6c7110e065e170679d8257,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8,PodSandboxId:c6a8751c9bc088cb4b48040b1b777de0b8dd7fa0ba7354cc6b454cec48c47790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733356587717490083,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4359e8a62956d9be4ff300e54d2dca,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619,PodSandboxId:05e1ede7541ebc57bfeb9baa703d958b15c09019773a24c9922ce401e62d3d0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733356587672682365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-168758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 216b15c2b5f2b8f51d538046badfbcd1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6287f18-5163-45c3-8f5e-6a8a126dd9c4 name=/runtime.v1.RuntimeService/ListContainers
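A note on the CRI-O entries above: the repeated &VersionRequest, &ImageFsInfoRequest and &ListContainersRequest lines are CRI gRPC calls served by CRI-O and logged from its otel-collector gRPC interceptor (the file= field in each entry); the kubelet and the log-collection tooling poll these endpoints continuously, which is why the same container list appears several times within a second. A similar capture can be taken from the crio unit journal on the node itself; this assumes the minikube profile name matches the node name ha-168758 shown above:
	minikube ssh -p ha-168758 -- sudo journalctl -u crio --no-pager | tail -n 50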
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db7ff0ab07801       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a10916ad1046       busybox-7dff88458-qnw8r
	acd6b28e94859       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b695edf511c5e       coredns-7c65d6cfc9-zgfvm
	775689612e70e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b16e3868f6d03       coredns-7c65d6cfc9-rdbhr
	ee81cf4f88af4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bb748d0b54fa2       storage-provisioner
	4110b2797a4e4       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   f35f03db32634       kindnet-9l7jr
	54fe1d8daa049       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   29cc806b9c066       kube-proxy-vjhjm
	f9cdaa348b12d       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   399ac5ad91348       kube-vip-ha-168758
	f83a840ab3b70       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9e97775967d5d       kube-scheduler-ha-168758
	1729d6a102a14       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7fa6e474a7667       kube-controller-manager-ha-168758
	9ba9268cc4c55       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c6a8751c9bc08       etcd-ha-168758
	413e537381433       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   05e1ede7541eb       kube-apiserver-ha-168758
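The container status table above is the crictl view of the same data returned by the ListContainers responses in the CRI-O log. Assuming the profile name matches the primary node (ha-168758), an equivalent listing can be reproduced directly on the node, for example:
	minikube ssh -p ha-168758 -- sudo crictl ps -a
	minikube ssh -p ha-168758 -- sudo crictl ps --name coredns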
	
	
	==> coredns [775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977] <==
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242119s
	[INFO] 10.244.1.2:36162 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009811s
	[INFO] 10.244.1.2:39283 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000118789s
	[INFO] 10.244.2.2:54111 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004220931s
	[INFO] 10.244.2.2:32931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237164s
	[INFO] 10.244.2.2:41295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142312s
	[INFO] 10.244.2.2:53943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008473674s
	[INFO] 10.244.2.2:46770 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202477s
	[INFO] 10.244.2.2:55371 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129803s
	[INFO] 10.244.0.4:54472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128336s
	[INFO] 10.244.0.4:56711 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094635s
	[INFO] 10.244.1.2:59375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133764s
	[INFO] 10.244.1.2:35780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111638s
	[INFO] 10.244.1.2:34975 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070467s
	[INFO] 10.244.2.2:45753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154754s
	[INFO] 10.244.2.2:33451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215593s
	[INFO] 10.244.2.2:40356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085623s
	[INFO] 10.244.0.4:46816 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012269s
	[INFO] 10.244.0.4:47330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069513s
	[INFO] 10.244.0.4:57640 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114217s
	[INFO] 10.244.1.2:33801 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096362s
	[INFO] 10.244.2.2:37705 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000200759s
	[INFO] 10.244.2.2:42211 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190547s
	[INFO] 10.244.0.4:33141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134356s
	[INFO] 10.244.0.4:43968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006677s
	
	
	==> coredns [acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c] <==
	[INFO] 10.244.2.2:49728 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316528s
	[INFO] 10.244.0.4:43307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166964s
	[INFO] 10.244.0.4:56572 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062823s
	[INFO] 10.244.0.4:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000465964s
	[INFO] 10.244.0.4:41018 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122656s
	[INFO] 10.244.0.4:58493 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154857s
	[INFO] 10.244.0.4:52626 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075295s
	[INFO] 10.244.1.2:49458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001666843s
	[INFO] 10.244.1.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102856s
	[INFO] 10.244.1.2:33580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001685081s
	[INFO] 10.244.1.2:42118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171665s
	[INFO] 10.244.1.2:59523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108873s
	[INFO] 10.244.2.2:49031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292612s
	[INFO] 10.244.0.4:56781 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089871s
	[INFO] 10.244.1.2:52724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218204s
	[INFO] 10.244.1.2:48399 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104187s
	[INFO] 10.244.1.2:60140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087293s
	[INFO] 10.244.2.2:47493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233961s
	[INFO] 10.244.2.2:38060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000321274s
	[INFO] 10.244.0.4:49076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086072s
	[INFO] 10.244.0.4:47481 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000284893s
	[INFO] 10.244.1.2:55588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252847s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257823s
	[INFO] 10.244.1.2:54163 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122038s
	[INFO] 10.244.1.2:39518 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016158s
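Reading the coredns logs above: lookups for short names such as kubernetes.default are expanded through the pod's DNS search path (default.svc.cluster.local, svc.cluster.local, cluster.local) because the name has fewer dots than the default ndots:5 setting, which is why each client produces a run of NXDOMAIN answers (including the forwarded absolute query kubernetes.default.) followed by a NOERROR for kubernetes.default.svc.cluster.local. Assuming the busybox image used by the test pod ships the nslookup applet, the same sequence can be reproduced with:
	kubectl exec busybox-7dff88458-qnw8r -- nslookup kubernetes.default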
	
	
	==> describe nodes <==
	Name:               ha-168758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_56_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:37 +0000   Wed, 04 Dec 2024 23:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-168758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cbaecf7e4184dafa60b26978754f7be
	  System UUID:                4cbaecf7-e418-4daf-a60b-26978754f7be
	  Boot ID:                    4bc94051-ae8a-40e7-9636-681faf9bf1fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qnw8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-rdbhr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-zgfvm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-168758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-9l7jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-168758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-168758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-vjhjm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-168758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-168758                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m27s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node ha-168758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node ha-168758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node ha-168758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  NodeReady                6m12s  kubelet          Node ha-168758 status is now: NodeReady
	  Normal  RegisteredNode           5m28s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
	  Normal  RegisteredNode           4m13s  node-controller  Node ha-168758 event: Registered Node ha-168758 in Controller
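For reference, the percentages in the Allocated resources table above are computed against the node's allocatable capacity: 950m CPU requested out of 2000m allocatable ≈ 47% and 100m CPU limits ≈ 5%; 290Mi memory requested out of 2164184Ki (≈ 2113Mi) allocatable ≈ 13%, and 390Mi memory limits ≈ 18%.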
	
	
	Name:               ha-168758-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_57_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:57:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:00:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:59:31 +0000   Thu, 05 Dec 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-168758-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f10b54702f3c44dfba5cac04964ccd96
	  System UUID:                f10b5470-2f3c-44df-ba5c-ac04964ccd96
	  Boot ID:                    33cbfb56-c30f-45e3-b72b-7b95c35b9817
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vzls9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-168758-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m36s
	  kube-system                 kindnet-f4s96                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-168758-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-ha-168758-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-s57v4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-168758-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-168758-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  Starting                 5m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m37s (x2 over 5m37s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x2 over 5m37s)  kubelet          Node ha-168758-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x2 over 5m37s)  kubelet          Node ha-168758-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeReady                5m15s                  kubelet          Node ha-168758-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-168758-m02 event: Registered Node ha-168758-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-168758-m02 status is now: NodeNotReady
	
	
	Name:               ha-168758-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_58_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:02:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:59:44 +0000   Wed, 04 Dec 2024 23:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-168758-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45e57e3f54964fb3a3aeed03051209b3
	  System UUID:                45e57e3f-5496-4fb3-a3ae-ed03051209b3
	  Boot ID:                    8e17ecfe-4d6f-4380-b2ec-d58e3528bee5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-brzwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-168758-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-qzjm2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m22s
	  kube-system                 kube-apiserver-ha-168758-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-168758-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-9hxpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-ha-168758-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-168758-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-168758-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x7 over 4m22s)  kubelet          Node ha-168758-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-168758-m03 event: Registered Node ha-168758-m03 in Controller
	
	
	Name:               ha-168758-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-168758-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-168758
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T23_59_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:59:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-168758-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:03:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:00:17 +0000   Wed, 04 Dec 2024 23:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:00:17 +0000   Thu, 05 Dec 2024 00:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    ha-168758-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 52df6477abaa40eb8464d395e2a11a95
	  System UUID:                52df6477-abaa-40eb-8464-d395e2a11a95
	  Boot ID:                    af84ed2d-18aa-4219-8f2e-d7ea7a1e76fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-964c5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m19s
	  kube-system                 kube-proxy-6r2gb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m19s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m19s)  kubelet          Node ha-168758-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m19s)  kubelet          Node ha-168758-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-168758-m04 event: Registered Node ha-168758-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-168758-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 4 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 4 23:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.965232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.552645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.572491] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.058309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061191] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.184233] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.102774] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.251131] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.779101] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.425619] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.056248] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.972412] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.078582] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128288] kauditd_printk_skb: 33 callbacks suppressed
	[ +14.702984] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 4 23:57] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8] <==
	{"level":"warn","ts":"2024-12-05T00:03:05.747700Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.848253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.858528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.866042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.869750Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.878503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.887745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.895563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.898826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.902440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.909493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.916156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.922263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.926404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.929696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.934636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.941035Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.948750Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.957109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.961107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.964115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.967888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.974492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:05.982649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T00:03:06.047670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"8efbd523922c356","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:03:06 up 7 min,  0 users,  load average: 0.21, 0.34, 0.19
	Linux ha-168758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302] <==
	I1205 00:02:32.976349       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:42.976549       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:42.977185       1 main.go:301] handling current node
	I1205 00:02:42.977331       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:42.977368       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:42.977798       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:42.977846       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:42.978121       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:42.978436       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:02:52.983049       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:02:52.983086       1 main.go:301] handling current node
	I1205 00:02:52.983099       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:02:52.983104       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:02:52.983347       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:02:52.983354       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:02:52.983479       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:02:52.983484       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	I1205 00:03:02.979170       1 main.go:297] Handling node with IPs: map[192.168.39.246:{}]
	I1205 00:03:02.979392       1 main.go:301] handling current node
	I1205 00:03:02.979462       1 main.go:297] Handling node with IPs: map[192.168.39.70:{}]
	I1205 00:03:02.979483       1 main.go:324] Node ha-168758-m02 has CIDR [10.244.1.0/24] 
	I1205 00:03:02.979879       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1205 00:03:02.979911       1 main.go:324] Node ha-168758-m03 has CIDR [10.244.2.0/24] 
	I1205 00:03:02.980160       1 main.go:297] Handling node with IPs: map[192.168.39.118:{}]
	I1205 00:03:02.980192       1 main.go:324] Node ha-168758-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619] <==
	W1204 23:56:31.867915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I1204 23:56:31.869149       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:56:31.873256       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:56:32.153052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:56:33.976231       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:56:33.989474       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1204 23:56:34.001309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:56:37.651703       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:56:37.764778       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1204 23:59:17.616400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33356: use of closed network connection
	E1204 23:59:17.796618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33360: use of closed network connection
	E1204 23:59:17.975175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33372: use of closed network connection
	E1204 23:59:18.159453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33388: use of closed network connection
	E1204 23:59:18.340465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33408: use of closed network connection
	E1204 23:59:18.525899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33424: use of closed network connection
	E1204 23:59:18.699204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33448: use of closed network connection
	E1204 23:59:18.871260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33456: use of closed network connection
	E1204 23:59:19.044471       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33468: use of closed network connection
	E1204 23:59:19.324768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33494: use of closed network connection
	E1204 23:59:19.503485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33512: use of closed network connection
	E1204 23:59:19.681162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33532: use of closed network connection
	E1204 23:59:19.864655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33548: use of closed network connection
	E1204 23:59:20.038197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33568: use of closed network connection
	E1204 23:59:20.209773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33584: use of closed network connection
	W1205 00:00:41.898434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.211 192.168.39.246]
	
	
	==> kube-controller-manager [1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3] <==
	E1204 23:59:46.495910       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2xb9f failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2xb9f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1204 23:59:46.913622       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-168758-m04\" does not exist"
	I1204 23:59:46.947226       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-168758-m04" podCIDRs=["10.244.3.0/24"]
	I1204 23:59:46.947330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.947390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:46.951979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.179648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.195618       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-168758-m04"
	I1204 23:59:47.217355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.297136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:47.621862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1204 23:59:57.226529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.702516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:06.703393       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:00:06.718143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:07.112438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:00:17.922402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m04"
	I1205 00:01:07.143195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-168758-m04"
	I1205 00:01:07.144088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.170589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.248294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	I1205 00:01:07.276459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.988319ms"
	I1205 00:01:07.277110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.518µs"
	I1205 00:01:12.432512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-168758-m02"
	
	
	==> kube-proxy [54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:56:38.767631       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:56:38.783503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E1204 23:56:38.783650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:56:38.826822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:56:38.826910       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:56:38.827013       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:56:38.829514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:56:38.830207       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:56:38.830274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:56:38.832929       1 config.go:199] "Starting service config controller"
	I1204 23:56:38.833423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:56:38.833660       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:56:38.833695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:56:38.836133       1 config.go:328] "Starting node config controller"
	I1204 23:56:38.836155       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:56:38.934569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:56:38.934627       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:56:38.936209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579] <==
	W1204 23:56:30.307297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:30.307308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:30.307579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.211435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:56:31.211491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.247731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:56:31.247775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.414104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.414215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.426986       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:31.427111       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:56:31.430415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:31.430490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.446966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:31.447114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.462015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:31.462105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:56:31.482639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:31.482672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1204 23:56:33.294482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:58:43.853823       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzjm2" node="ha-168758-m03"
	E1204 23:58:43.856189       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzjm2\": pod kindnet-qzjm2 is already assigned to node \"ha-168758-m03\"" pod="kube-system/kindnet-qzjm2"
	E1204 23:59:46.998972       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6r2gb" node="ha-168758-m04"
	E1204 23:59:46.999075       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6r2gb\": pod kube-proxy-6r2gb is already assigned to node \"ha-168758-m04\"" pod="kube-system/kube-proxy-6r2gb"
	
	
	==> kubelet <==
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017719    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:34 ha-168758 kubelet[1298]: E1205 00:01:34.017769    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356894017416921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.019770    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:44 ha-168758 kubelet[1298]: E1205 00:01:44.020092    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356904019230886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022125    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:01:54 ha-168758 kubelet[1298]: E1205 00:01:54.022148    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356914021750332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026278    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:04 ha-168758 kubelet[1298]: E1205 00:02:04.026320    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356924025030004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029261    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:14 ha-168758 kubelet[1298]: E1205 00:02:14.029299    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356934028834031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032005    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:24 ha-168758 kubelet[1298]: E1205 00:02:24.032327    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356944031328211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:33 ha-168758 kubelet[1298]: E1205 00:02:33.944974    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 00:02:33 ha-168758 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 00:02:33 ha-168758 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:34 ha-168758 kubelet[1298]: E1205 00:02:34.034558    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356954034156458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037075    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:44 ha-168758 kubelet[1298]: E1205 00:02:44.037120    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356964036447971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:54 ha-168758 kubelet[1298]: E1205 00:02:54.039608    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356974039135405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:02:54 ha-168758 kubelet[1298]: E1205 00:02:54.039657    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356974039135405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:03:04 ha-168758 kubelet[1298]: E1205 00:03:04.041452    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356984041036988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 00:03:04 ha-168758 kubelet[1298]: E1205 00:03:04.041871    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733356984041036988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-168758 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-168758 -v=7 --alsologtostderr
E1205 00:05:07.013992   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-168758 -v=7 --alsologtostderr: exit status 82 (2m1.896663816s)

                                                
                                                
-- stdout --
	* Stopping node "ha-168758-m04"  ...
	* Stopping node "ha-168758-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 00:03:07.039390   41839 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:03:07.039529   41839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:03:07.039538   41839 out.go:358] Setting ErrFile to fd 2...
	I1205 00:03:07.039543   41839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:03:07.039741   41839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:03:07.039999   41839 out.go:352] Setting JSON to false
	I1205 00:03:07.040096   41839 mustload.go:65] Loading cluster: ha-168758
	I1205 00:03:07.040537   41839 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:03:07.040665   41839 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1205 00:03:07.040851   41839 mustload.go:65] Loading cluster: ha-168758
	I1205 00:03:07.041011   41839 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:03:07.041045   41839 stop.go:39] StopHost: ha-168758-m04
	I1205 00:03:07.041407   41839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:03:07.041477   41839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:03:07.056429   41839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I1205 00:03:07.056897   41839 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:03:07.057456   41839 main.go:141] libmachine: Using API Version  1
	I1205 00:03:07.057476   41839 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:03:07.057803   41839 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:03:07.060484   41839 out.go:177] * Stopping node "ha-168758-m04"  ...
	I1205 00:03:07.061732   41839 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 00:03:07.061756   41839 main.go:141] libmachine: (ha-168758-m04) Calling .DriverName
	I1205 00:03:07.062038   41839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 00:03:07.062069   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHHostname
	I1205 00:03:07.064640   41839 main.go:141] libmachine: (ha-168758-m04) DBG | domain ha-168758-m04 has defined MAC address 52:54:00:3b:b8:fb in network mk-ha-168758
	I1205 00:03:07.065107   41839 main.go:141] libmachine: (ha-168758-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:fb", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:59:35 +0000 UTC Type:0 Mac:52:54:00:3b:b8:fb Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-168758-m04 Clientid:01:52:54:00:3b:b8:fb}
	I1205 00:03:07.065145   41839 main.go:141] libmachine: (ha-168758-m04) DBG | domain ha-168758-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:3b:b8:fb in network mk-ha-168758
	I1205 00:03:07.065310   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHPort
	I1205 00:03:07.065482   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHKeyPath
	I1205 00:03:07.065655   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHUsername
	I1205 00:03:07.065768   41839 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m04/id_rsa Username:docker}
	I1205 00:03:07.158213   41839 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 00:03:07.214365   41839 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 00:03:07.267832   41839 main.go:141] libmachine: Stopping "ha-168758-m04"...
	I1205 00:03:07.267863   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetState
	I1205 00:03:07.269722   41839 main.go:141] libmachine: (ha-168758-m04) Calling .Stop
	I1205 00:03:07.273421   41839 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 0/120
	I1205 00:03:08.456366   41839 main.go:141] libmachine: (ha-168758-m04) Calling .GetState
	I1205 00:03:08.457783   41839 main.go:141] libmachine: Machine "ha-168758-m04" was stopped.
	I1205 00:03:08.457800   41839 stop.go:75] duration metric: took 1.396070312s to stop
	I1205 00:03:08.457831   41839 stop.go:39] StopHost: ha-168758-m03
	I1205 00:03:08.458280   41839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:03:08.458330   41839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:03:08.473464   41839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I1205 00:03:08.473831   41839 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:03:08.474363   41839 main.go:141] libmachine: Using API Version  1
	I1205 00:03:08.474385   41839 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:03:08.474683   41839 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:03:08.476796   41839 out.go:177] * Stopping node "ha-168758-m03"  ...
	I1205 00:03:08.478130   41839 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 00:03:08.478160   41839 main.go:141] libmachine: (ha-168758-m03) Calling .DriverName
	I1205 00:03:08.478359   41839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 00:03:08.478386   41839 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHHostname
	I1205 00:03:08.481201   41839 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1205 00:03:08.481606   41839 main.go:141] libmachine: (ha-168758-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7a:4c", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:58:09 +0000 UTC Type:0 Mac:52:54:00:25:7a:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-168758-m03 Clientid:01:52:54:00:25:7a:4c}
	I1205 00:03:08.481636   41839 main.go:141] libmachine: (ha-168758-m03) DBG | domain ha-168758-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:25:7a:4c in network mk-ha-168758
	I1205 00:03:08.481817   41839 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHPort
	I1205 00:03:08.482025   41839 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHKeyPath
	I1205 00:03:08.482179   41839 main.go:141] libmachine: (ha-168758-m03) Calling .GetSSHUsername
	I1205 00:03:08.482300   41839 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m03/id_rsa Username:docker}
	I1205 00:03:08.571334   41839 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 00:03:08.626715   41839 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 00:03:08.681909   41839 main.go:141] libmachine: Stopping "ha-168758-m03"...
	I1205 00:03:08.681938   41839 main.go:141] libmachine: (ha-168758-m03) Calling .GetState
	I1205 00:03:08.683401   41839 main.go:141] libmachine: (ha-168758-m03) Calling .Stop
	I1205 00:03:08.686594   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 0/120
	I1205 00:03:09.688032   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 1/120
	I1205 00:03:10.689471   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 2/120
	I1205 00:03:11.691492   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 3/120
	I1205 00:03:12.692883   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 4/120
	I1205 00:03:13.695003   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 5/120
	I1205 00:03:14.696554   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 6/120
	I1205 00:03:15.698094   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 7/120
	I1205 00:03:16.699748   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 8/120
	I1205 00:03:17.701859   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 9/120
	I1205 00:03:18.703579   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 10/120
	I1205 00:03:19.705417   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 11/120
	I1205 00:03:20.706862   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 12/120
	I1205 00:03:21.708211   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 13/120
	I1205 00:03:22.710329   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 14/120
	I1205 00:03:23.712785   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 15/120
	I1205 00:03:24.714545   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 16/120
	I1205 00:03:25.716188   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 17/120
	I1205 00:03:26.717731   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 18/120
	I1205 00:03:27.719206   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 19/120
	I1205 00:03:28.721191   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 20/120
	I1205 00:03:29.722947   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 21/120
	I1205 00:03:30.724661   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 22/120
	I1205 00:03:31.726638   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 23/120
	I1205 00:03:32.728267   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 24/120
	I1205 00:03:33.730620   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 25/120
	I1205 00:03:34.732246   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 26/120
	I1205 00:03:35.734103   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 27/120
	I1205 00:03:36.736373   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 28/120
	I1205 00:03:37.738161   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 29/120
	I1205 00:03:38.739967   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 30/120
	I1205 00:03:39.741491   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 31/120
	I1205 00:03:40.743129   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 32/120
	I1205 00:03:41.744516   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 33/120
	I1205 00:03:42.746746   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 34/120
	I1205 00:03:43.748298   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 35/120
	I1205 00:03:44.749622   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 36/120
	I1205 00:03:45.751150   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 37/120
	I1205 00:03:46.752657   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 38/120
	I1205 00:03:47.753855   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 39/120
	I1205 00:03:48.755594   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 40/120
	I1205 00:03:49.756937   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 41/120
	I1205 00:03:50.758198   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 42/120
	I1205 00:03:51.759520   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 43/120
	I1205 00:03:52.760949   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 44/120
	I1205 00:03:53.762800   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 45/120
	I1205 00:03:54.764462   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 46/120
	I1205 00:03:55.766307   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 47/120
	I1205 00:03:56.767553   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 48/120
	I1205 00:03:57.769690   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 49/120
	I1205 00:03:58.771403   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 50/120
	I1205 00:03:59.772743   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 51/120
	I1205 00:04:00.774014   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 52/120
	I1205 00:04:01.775408   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 53/120
	I1205 00:04:02.777768   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 54/120
	I1205 00:04:03.779855   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 55/120
	I1205 00:04:04.781145   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 56/120
	I1205 00:04:05.782662   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 57/120
	I1205 00:04:06.783855   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 58/120
	I1205 00:04:07.785096   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 59/120
	I1205 00:04:08.787278   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 60/120
	I1205 00:04:09.788555   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 61/120
	I1205 00:04:10.790011   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 62/120
	I1205 00:04:11.791345   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 63/120
	I1205 00:04:12.792549   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 64/120
	I1205 00:04:13.794370   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 65/120
	I1205 00:04:14.795638   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 66/120
	I1205 00:04:15.796856   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 67/120
	I1205 00:04:16.798145   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 68/120
	I1205 00:04:17.800300   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 69/120
	I1205 00:04:18.801995   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 70/120
	I1205 00:04:19.803849   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 71/120
	I1205 00:04:20.805697   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 72/120
	I1205 00:04:21.807919   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 73/120
	I1205 00:04:22.809497   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 74/120
	I1205 00:04:23.811124   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 75/120
	I1205 00:04:24.812419   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 76/120
	I1205 00:04:25.814216   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 77/120
	I1205 00:04:26.816379   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 78/120
	I1205 00:04:27.817815   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 79/120
	I1205 00:04:28.819752   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 80/120
	I1205 00:04:29.821193   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 81/120
	I1205 00:04:30.823224   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 82/120
	I1205 00:04:31.824595   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 83/120
	I1205 00:04:32.825915   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 84/120
	I1205 00:04:33.827703   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 85/120
	I1205 00:04:34.828925   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 86/120
	I1205 00:04:35.830774   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 87/120
	I1205 00:04:36.832035   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 88/120
	I1205 00:04:37.833571   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 89/120
	I1205 00:04:38.835370   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 90/120
	I1205 00:04:39.836688   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 91/120
	I1205 00:04:40.837917   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 92/120
	I1205 00:04:41.839294   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 93/120
	I1205 00:04:42.840605   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 94/120
	I1205 00:04:43.842233   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 95/120
	I1205 00:04:44.844383   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 96/120
	I1205 00:04:45.845679   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 97/120
	I1205 00:04:46.848142   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 98/120
	I1205 00:04:47.849455   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 99/120
	I1205 00:04:48.851047   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 100/120
	I1205 00:04:49.852356   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 101/120
	I1205 00:04:50.853513   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 102/120
	I1205 00:04:51.854796   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 103/120
	I1205 00:04:52.855971   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 104/120
	I1205 00:04:53.857831   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 105/120
	I1205 00:04:54.859271   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 106/120
	I1205 00:04:55.860572   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 107/120
	I1205 00:04:56.861992   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 108/120
	I1205 00:04:57.863698   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 109/120
	I1205 00:04:58.865764   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 110/120
	I1205 00:04:59.867615   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 111/120
	I1205 00:05:00.869706   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 112/120
	I1205 00:05:01.871338   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 113/120
	I1205 00:05:02.873437   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 114/120
	I1205 00:05:03.875477   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 115/120
	I1205 00:05:04.876948   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 116/120
	I1205 00:05:05.878317   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 117/120
	I1205 00:05:06.880385   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 118/120
	I1205 00:05:07.881743   41839 main.go:141] libmachine: (ha-168758-m03) Waiting for machine to stop 119/120
	I1205 00:05:08.882554   41839 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 00:05:08.882628   41839 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 00:05:08.884611   41839 out.go:201] 
	W1205 00:05:08.886078   41839 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 00:05:08.886097   41839 out.go:270] * 
	* 
	W1205 00:05:08.888413   41839 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 00:05:08.889514   41839 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-168758 -v=7 --alsologtostderr" : exit status 82
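The "Waiting for machine to stop N/120" lines in the stderr above are a fixed-budget poll: roughly one attempt per second, 120 attempts, after which the stop is abandoned and minikube exits with GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that pattern, assuming a hypothetical vm interface rather than minikube's real libmachine driver types:

```go
package stopwait

import (
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a libmachine-style driver that can report
// its state and request a shutdown; it is not minikube's actual interface.
type vm interface {
	Name() string
	State() string // e.g. "Running" or "Stopped"
	Stop() error
}

// waitForStop polls until the machine reports a non-running state or the
// attempt budget is exhausted, mirroring the 0/120 ... 119/120 progression
// in the log above.
func waitForStop(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if m.State() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Exhausting the budget is what surfaces as GUEST_STOP_TIMEOUT above.
	return fmt.Errorf("unable to stop vm %q, current state %q", m.Name(), m.State())
}
```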
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168758 --wait=true -v=7 --alsologtostderr
E1205 00:05:34.721480   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-168758 --wait=true -v=7 --alsologtostderr: (4m0.094691456s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-168758
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (2.165402352s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-168758 node start m02 -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-168758 -v=7                                                           | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-168758 -v=7                                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-168758 --wait=true -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:05 UTC | 05 Dec 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-168758                                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:09 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:05:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:05:08.938207   42343 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:05:08.938460   42343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:05:08.938468   42343 out.go:358] Setting ErrFile to fd 2...
	I1205 00:05:08.938474   42343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:05:08.938630   42343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:05:08.939188   42343 out.go:352] Setting JSON to false
	I1205 00:05:08.940053   42343 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6455,"bootTime":1733350654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:05:08.940158   42343 start.go:139] virtualization: kvm guest
	I1205 00:05:08.942415   42343 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:05:08.943689   42343 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:05:08.943735   42343 notify.go:220] Checking for updates...
	I1205 00:05:08.945936   42343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:05:08.947134   42343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:05:08.948111   42343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:05:08.949244   42343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:05:08.950407   42343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:05:08.951954   42343 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:05:08.952076   42343 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:05:08.952556   42343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:05:08.952633   42343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:05:08.969686   42343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I1205 00:05:08.970296   42343 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:05:08.970854   42343 main.go:141] libmachine: Using API Version  1
	I1205 00:05:08.970881   42343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:05:08.971308   42343 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:05:08.971523   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.007799   42343 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:05:09.009215   42343 start.go:297] selected driver: kvm2
	I1205 00:05:09.009232   42343 start.go:901] validating driver "kvm2" against &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:05:09.009431   42343 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:05:09.009732   42343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:05:09.009796   42343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:05:09.025026   42343 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:05:09.025704   42343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:05:09.025737   42343 cni.go:84] Creating CNI manager for ""
	I1205 00:05:09.025786   42343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 00:05:09.025849   42343 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:05:09.026014   42343 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:05:09.028087   42343 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1205 00:05:09.029556   42343 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:05:09.029594   42343 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 00:05:09.029603   42343 cache.go:56] Caching tarball of preloaded images
	I1205 00:05:09.029696   42343 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:05:09.029710   42343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 00:05:09.029831   42343 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1205 00:05:09.030061   42343 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:05:09.030109   42343 start.go:364] duration metric: took 27.723µs to acquireMachinesLock for "ha-168758"
	I1205 00:05:09.030131   42343 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:05:09.030142   42343 fix.go:54] fixHost starting: 
	I1205 00:05:09.030404   42343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:05:09.030440   42343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:05:09.044700   42343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I1205 00:05:09.045283   42343 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:05:09.045766   42343 main.go:141] libmachine: Using API Version  1
	I1205 00:05:09.045788   42343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:05:09.046182   42343 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:05:09.046417   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.046611   42343 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1205 00:05:09.048174   42343 fix.go:112] recreateIfNeeded on ha-168758: state=Running err=<nil>
	W1205 00:05:09.048190   42343 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:05:09.050066   42343 out.go:177] * Updating the running kvm2 "ha-168758" VM ...
	I1205 00:05:09.051289   42343 machine.go:93] provisionDockerMachine start ...
	I1205 00:05:09.051310   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.051517   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.053879   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.054371   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.054396   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.054570   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.054724   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.054909   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.055037   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.055192   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.055492   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.055510   42343 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:05:09.172625   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1205 00:05:09.172651   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.172878   42343 buildroot.go:166] provisioning hostname "ha-168758"
	I1205 00:05:09.172917   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.173136   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.175740   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.176126   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.176159   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.176349   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.176531   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.176678   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.176815   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.176969   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.177134   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.177146   42343 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1205 00:05:09.310558   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1205 00:05:09.310593   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.313461   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.313788   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.313817   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.314053   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.314230   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.314369   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.314486   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.314629   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.314840   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.314857   42343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:05:09.431068   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:05:09.431104   42343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:05:09.431120   42343 buildroot.go:174] setting up certificates
	I1205 00:05:09.431130   42343 provision.go:84] configureAuth start
	I1205 00:05:09.431138   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.431396   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:05:09.433894   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.434205   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.434229   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.434396   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.436824   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.437194   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.437214   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.437342   42343 provision.go:143] copyHostCerts
	I1205 00:05:09.437401   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:05:09.437437   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:05:09.437446   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:05:09.437511   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:05:09.437620   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:05:09.437640   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:05:09.437644   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:05:09.437668   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:05:09.437747   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:05:09.437767   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:05:09.437771   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:05:09.437792   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:05:09.437838   42343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
	I1205 00:05:09.772880   42343 provision.go:177] copyRemoteCerts
	I1205 00:05:09.772935   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:05:09.772961   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.775822   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.776139   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.776155   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.776306   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.776503   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.776727   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.776913   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:05:09.864331   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 00:05:09.864479   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:05:09.891163   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 00:05:09.891232   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 00:05:09.919502   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 00:05:09.919572   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 00:05:09.947282   42343 provision.go:87] duration metric: took 516.137635ms to configureAuth
	I1205 00:05:09.947311   42343 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:05:09.947594   42343 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:05:09.947671   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.950191   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.950591   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.950617   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.950713   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.950929   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.951131   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.951303   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.951512   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.951713   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.951738   42343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:06:40.837557   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:06:40.837614   42343 machine.go:96] duration metric: took 1m31.786308008s to provisionDockerMachine
	I1205 00:06:40.837630   42343 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1205 00:06:40.837643   42343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:06:40.837666   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:40.837978   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:06:40.838006   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:40.840911   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.841270   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:40.841292   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.841462   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:40.841598   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:40.841775   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:40.841879   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:40.928323   42343 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:06:40.932271   42343 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:06:40.932295   42343 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:06:40.932351   42343 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:06:40.932444   42343 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:06:40.932455   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1205 00:06:40.932533   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:06:40.941203   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:06:40.962798   42343 start.go:296] duration metric: took 125.137658ms for postStartSetup
	I1205 00:06:40.962842   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:40.963161   42343 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 00:06:40.963189   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:40.965684   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.965994   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:40.966038   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.966327   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:40.966517   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:40.966680   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:40.966864   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	W1205 00:06:41.052726   42343 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 00:06:41.052759   42343 fix.go:56] duration metric: took 1m32.02261706s for fixHost
	I1205 00:06:41.052785   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.055247   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.055577   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.055616   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.055785   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.055978   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.056166   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.056332   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.056500   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:06:41.056658   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:06:41.056667   42343 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:06:41.170231   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733357201.144699345
	
	I1205 00:06:41.170255   42343 fix.go:216] guest clock: 1733357201.144699345
	I1205 00:06:41.170264   42343 fix.go:229] Guest: 2024-12-05 00:06:41.144699345 +0000 UTC Remote: 2024-12-05 00:06:41.052767937 +0000 UTC m=+92.152134861 (delta=91.931408ms)
	I1205 00:06:41.170296   42343 fix.go:200] guest clock delta is within tolerance: 91.931408ms
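The fix.go lines above compare the guest's `date +%s.%N` output against the local clock and accept the skew when it stays below a tolerance. A self-contained Go sketch of that check (the 1s tolerance here is an assumption; the log does not state minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1733357201.144699345" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, not minikube's value
	guest, err := parseGuestClock("1733357201.144699345")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}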
	I1205 00:06:41.170303   42343 start.go:83] releasing machines lock for "ha-168758", held for 1m32.140182188s
	I1205 00:06:41.170326   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.170572   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:06:41.172903   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.173225   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.173246   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.173393   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.173869   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.174036   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.174138   42343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:06:41.174175   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.174284   42343 ssh_runner.go:195] Run: cat /version.json
	I1205 00:06:41.174319   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.176904   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.176974   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177338   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.177363   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177481   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.177514   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.177565   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177635   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.177656   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.177822   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.177823   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.178027   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:41.178058   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.178199   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:41.268261   42343 ssh_runner.go:195] Run: systemctl --version
	I1205 00:06:41.301655   42343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:06:41.470236   42343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 00:06:41.479741   42343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:06:41.479805   42343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:06:41.489572   42343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 00:06:41.489589   42343 start.go:495] detecting cgroup driver to use...
	I1205 00:06:41.489632   42343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:06:41.505607   42343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:06:41.520393   42343 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:06:41.520457   42343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:06:41.533860   42343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:06:41.547032   42343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:06:41.694361   42343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:06:41.837770   42343 docker.go:233] disabling docker service ...
	I1205 00:06:41.837830   42343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:06:41.854200   42343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:06:41.866840   42343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:06:42.017830   42343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:06:42.176470   42343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:06:42.190520   42343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:06:42.209105   42343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 00:06:42.209173   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.218938   42343 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:06:42.218999   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.228932   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.239074   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.249732   42343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:06:42.259975   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.269784   42343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.280151   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.289814   42343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:06:42.299081   42343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:06:42.307857   42343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:06:42.448010   42343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 00:06:44.315321   42343 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.86726863s)
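The sed invocations above rewrite whole `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. A small Go sketch of that rewrite technique, applied to a simplified in-memory config rather than minikube's real files:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setConfKey rewrites every `key = ...` line, mirroring
// `sed -i 's|^.*key = .*$|key = "value"|'` from the log.
func setConfKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := strings.Join([]string{
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`cgroup_manager = "systemd"`,
	}, "\n")
	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}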
	I1205 00:06:44.315370   42343 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:06:44.315424   42343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:06:44.320071   42343 start.go:563] Will wait 60s for crictl version
	I1205 00:06:44.320126   42343 ssh_runner.go:195] Run: which crictl
	I1205 00:06:44.323583   42343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:06:44.359656   42343 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 00:06:44.359730   42343 ssh_runner.go:195] Run: crio --version
	I1205 00:06:44.387133   42343 ssh_runner.go:195] Run: crio --version
	I1205 00:06:44.416532   42343 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 00:06:44.417877   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:06:44.420348   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:44.420767   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:44.420794   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:44.421019   42343 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:06:44.425611   42343 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:06:44.425740   42343 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:06:44.425782   42343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:06:44.471926   42343 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:06:44.471947   42343 crio.go:433] Images already preloaded, skipping extraction
	I1205 00:06:44.472007   42343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:06:44.504812   42343 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:06:44.504835   42343 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:06:44.504843   42343 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1205 00:06:44.504969   42343 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 00:06:44.505073   42343 ssh_runner.go:195] Run: crio config
	I1205 00:06:44.551500   42343 cni.go:84] Creating CNI manager for ""
	I1205 00:06:44.551522   42343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 00:06:44.551531   42343 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:06:44.551558   42343 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:06:44.551676   42343 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
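	The kubeadm config dumped above is rendered from the kubeadm options logged at kubeadm.go:189. A tiny text/template sketch of that rendering step, restricted to the InitConfiguration fragment (illustrative only; the opts field names are assumptions, not minikube's types):

package main

import (
	"os"
	"text/template"
)

// initCfg is a fragment of the InitConfiguration dumped above; the remaining
// sections (ClusterConfiguration, KubeletConfiguration, ...) are rendered the
// same way from the logged kubeadm options.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

// opts holds only the fields this fragment needs.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.39.246",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "ha-168758",
	}); err != nil {
		panic(err)
	}
}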
	
	I1205 00:06:44.551699   42343 kube-vip.go:115] generating kube-vip config ...
	I1205 00:06:44.551763   42343 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 00:06:44.562848   42343 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 00:06:44.562964   42343 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
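	kube-vip.go:167 above enables control-plane load-balancing (the lb_enable/lb_port entries in the manifest) only after the IPVS modules load successfully. A rough Go sketch of that decision; minikube runs the modprobe on the guest over SSH, whereas this runs it locally purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe for IPVS support the same way the log shows: if the modules load,
	// control-plane load-balancing can be switched on in the kube-vip pod.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	lbEnable := err == nil
	fmt.Printf("lb_enable=%v lb_port=%d\n", lbEnable, 8443)
}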
	I1205 00:06:44.563014   42343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:06:44.572015   42343 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:06:44.572064   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 00:06:44.580538   42343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 00:06:44.595789   42343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:06:44.611063   42343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 00:06:44.626623   42343 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 00:06:44.642458   42343 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 00:06:44.647067   42343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:06:44.787487   42343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:06:44.801541   42343 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1205 00:06:44.801562   42343 certs.go:194] generating shared ca certs ...
	I1205 00:06:44.801588   42343 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.801724   42343 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:06:44.801765   42343 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:06:44.801775   42343 certs.go:256] generating profile certs ...
	I1205 00:06:44.801848   42343 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1205 00:06:44.801874   42343 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17
	I1205 00:06:44.801892   42343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1205 00:06:44.872877   42343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 ...
	I1205 00:06:44.872906   42343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17: {Name:mkfc8c2d0eb166a47f30bbdc85d581d701b23d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.873072   42343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17 ...
	I1205 00:06:44.873083   42343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17: {Name:mk0625ec44fad7ec5524605fa122e4306d4c6dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.873165   42343 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1205 00:06:44.873327   42343 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
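	The apiserver profile cert generated above carries SANs for the service IP, localhost, every control-plane node IP and the HA VIP. A compact crypto/x509 sketch of building such a SAN list (self-signed here for brevity; minikube signs with its minikubeCA key instead):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SANs mirror the IP list logged above for the apiserver cert.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.70"),
			net.ParseIP("192.168.39.211"), net.ParseIP("192.168.39.254"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; the real cert is issued by the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate with %d IP SANs\n", len(der), len(tmpl.IPAddresses))
}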
	I1205 00:06:44.873450   42343 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1205 00:06:44.873464   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 00:06:44.873477   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 00:06:44.873487   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 00:06:44.873500   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 00:06:44.873510   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 00:06:44.873522   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 00:06:44.873531   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 00:06:44.873543   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 00:06:44.873583   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:06:44.873610   42343 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:06:44.873619   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:06:44.873638   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:06:44.873659   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:06:44.873679   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:06:44.873721   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:06:44.873745   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1205 00:06:44.873758   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1205 00:06:44.873771   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:44.874401   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:06:44.898496   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:06:44.921108   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:06:44.943429   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:06:44.965691   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 00:06:44.987885   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:06:45.011390   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:06:45.034167   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 00:06:45.057443   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:06:45.079893   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:06:45.101792   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:06:45.124012   42343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:06:45.139176   42343 ssh_runner.go:195] Run: openssl version
	I1205 00:06:45.144597   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:06:45.154547   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.159170   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.159254   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.164742   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:06:45.173383   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:06:45.183023   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.186952   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.187013   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.192156   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:06:45.200699   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:06:45.210176   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.214220   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.214259   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.219747   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 00:06:45.228171   42343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:06:45.232178   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:06:45.237426   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:06:45.242697   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:06:45.247758   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:06:45.252927   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:06:45.258130   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
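	Each `openssl x509 -noout -checkend 86400` run above succeeds only if the certificate remains valid for at least another 24 hours. An equivalent check in Go (the path and 24h window mirror the log; this is an illustration, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least another d, which is what `openssl x509 -checkend` verifies.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}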
	I1205 00:06:45.263094   42343 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:06:45.263189   42343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:06:45.263224   42343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:06:45.297607   42343 cri.go:89] found id: "7d1d0ce2d5700424631af163ab9a351842fca45d03f414c7acc753b2b958d51f"
	I1205 00:06:45.297628   42343 cri.go:89] found id: "761fd6abd68ce8d95999e639827fe67e574790e0988a1d0d22c050fd9bc008b6"
	I1205 00:06:45.297632   42343 cri.go:89] found id: "556aaa4a72a7fbe61fdbaa788d2ab07d68962afabfe3b1a093b3c492bd7e7ae7"
	I1205 00:06:45.297634   42343 cri.go:89] found id: "acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c"
	I1205 00:06:45.297637   42343 cri.go:89] found id: "775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977"
	I1205 00:06:45.297640   42343 cri.go:89] found id: "ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862"
	I1205 00:06:45.297643   42343 cri.go:89] found id: "4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302"
	I1205 00:06:45.297646   42343 cri.go:89] found id: "54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2"
	I1205 00:06:45.297648   42343 cri.go:89] found id: "f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd"
	I1205 00:06:45.297654   42343 cri.go:89] found id: "f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579"
	I1205 00:06:45.297656   42343 cri.go:89] found id: "1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3"
	I1205 00:06:45.297659   42343 cri.go:89] found id: "9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8"
	I1205 00:06:45.297662   42343 cri.go:89] found id: "413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619"
	I1205 00:06:45.297665   42343 cri.go:89] found id: ""
	I1205 00:06:45.297698   42343 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 stop -v=7 --alsologtostderr
E1205 00:10:07.014791   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-168758 stop -v=7 --alsologtostderr: exit status 82 (2m0.48435748s)

                                                
                                                
-- stdout --
	* Stopping node "ha-168758-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 00:09:29.150935   44048 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:09:29.151079   44048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:09:29.151093   44048 out.go:358] Setting ErrFile to fd 2...
	I1205 00:09:29.151099   44048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:09:29.151271   44048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:09:29.151500   44048 out.go:352] Setting JSON to false
	I1205 00:09:29.151571   44048 mustload.go:65] Loading cluster: ha-168758
	I1205 00:09:29.151943   44048 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:09:29.152043   44048 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1205 00:09:29.152217   44048 mustload.go:65] Loading cluster: ha-168758
	I1205 00:09:29.152344   44048 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:09:29.152382   44048 stop.go:39] StopHost: ha-168758-m04
	I1205 00:09:29.152720   44048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:09:29.152773   44048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:09:29.167730   44048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I1205 00:09:29.168233   44048 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:09:29.168804   44048 main.go:141] libmachine: Using API Version  1
	I1205 00:09:29.168830   44048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:09:29.169210   44048 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:09:29.170974   44048 out.go:177] * Stopping node "ha-168758-m04"  ...
	I1205 00:09:29.172698   44048 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 00:09:29.172727   44048 main.go:141] libmachine: (ha-168758-m04) Calling .DriverName
	I1205 00:09:29.173086   44048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 00:09:29.173112   44048 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHHostname
	I1205 00:09:29.176157   44048 main.go:141] libmachine: (ha-168758-m04) DBG | domain ha-168758-m04 has defined MAC address 52:54:00:3b:b8:fb in network mk-ha-168758
	I1205 00:09:29.176636   44048 main.go:141] libmachine: (ha-168758-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:fb", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 01:08:56 +0000 UTC Type:0 Mac:52:54:00:3b:b8:fb Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-168758-m04 Clientid:01:52:54:00:3b:b8:fb}
	I1205 00:09:29.176661   44048 main.go:141] libmachine: (ha-168758-m04) DBG | domain ha-168758-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:3b:b8:fb in network mk-ha-168758
	I1205 00:09:29.176859   44048 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHPort
	I1205 00:09:29.177040   44048 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHKeyPath
	I1205 00:09:29.177163   44048 main.go:141] libmachine: (ha-168758-m04) Calling .GetSSHUsername
	I1205 00:09:29.177331   44048 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758-m04/id_rsa Username:docker}
	I1205 00:09:29.265148   44048 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 00:09:29.318111   44048 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 00:09:29.372263   44048 main.go:141] libmachine: Stopping "ha-168758-m04"...
	I1205 00:09:29.372373   44048 main.go:141] libmachine: (ha-168758-m04) Calling .GetState
	I1205 00:09:29.374244   44048 main.go:141] libmachine: (ha-168758-m04) Calling .Stop
	I1205 00:09:29.378597   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 0/120
	I1205 00:09:30.380125   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 1/120
	I1205 00:09:31.381994   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 2/120
	I1205 00:09:32.383636   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 3/120
	I1205 00:09:33.385155   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 4/120
	I1205 00:09:34.387312   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 5/120
	I1205 00:09:35.389076   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 6/120
	I1205 00:09:36.390323   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 7/120
	I1205 00:09:37.391604   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 8/120
	I1205 00:09:38.394158   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 9/120
	I1205 00:09:39.395590   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 10/120
	I1205 00:09:40.397243   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 11/120
	I1205 00:09:41.398501   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 12/120
	I1205 00:09:42.399683   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 13/120
	I1205 00:09:43.400868   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 14/120
	I1205 00:09:44.402928   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 15/120
	I1205 00:09:45.404474   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 16/120
	I1205 00:09:46.405757   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 17/120
	I1205 00:09:47.408226   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 18/120
	I1205 00:09:48.410710   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 19/120
	I1205 00:09:49.413045   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 20/120
	I1205 00:09:50.414884   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 21/120
	I1205 00:09:51.416214   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 22/120
	I1205 00:09:52.417703   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 23/120
	I1205 00:09:53.419990   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 24/120
	I1205 00:09:54.422022   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 25/120
	I1205 00:09:55.423295   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 26/120
	I1205 00:09:56.424709   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 27/120
	I1205 00:09:57.426236   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 28/120
	I1205 00:09:58.428513   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 29/120
	I1205 00:09:59.430725   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 30/120
	I1205 00:10:00.431940   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 31/120
	I1205 00:10:01.433326   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 32/120
	I1205 00:10:02.434699   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 33/120
	I1205 00:10:03.436148   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 34/120
	I1205 00:10:04.437249   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 35/120
	I1205 00:10:05.438563   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 36/120
	I1205 00:10:06.439883   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 37/120
	I1205 00:10:07.441237   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 38/120
	I1205 00:10:08.442600   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 39/120
	I1205 00:10:09.444657   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 40/120
	I1205 00:10:10.446077   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 41/120
	I1205 00:10:11.447371   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 42/120
	I1205 00:10:12.448681   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 43/120
	I1205 00:10:13.449996   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 44/120
	I1205 00:10:14.451712   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 45/120
	I1205 00:10:15.453127   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 46/120
	I1205 00:10:16.454303   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 47/120
	I1205 00:10:17.456653   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 48/120
	I1205 00:10:18.458192   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 49/120
	I1205 00:10:19.460560   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 50/120
	I1205 00:10:20.462672   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 51/120
	I1205 00:10:21.464407   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 52/120
	I1205 00:10:22.465846   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 53/120
	I1205 00:10:23.467268   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 54/120
	I1205 00:10:24.468966   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 55/120
	I1205 00:10:25.470256   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 56/120
	I1205 00:10:26.472639   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 57/120
	I1205 00:10:27.473912   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 58/120
	I1205 00:10:28.475410   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 59/120
	I1205 00:10:29.477578   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 60/120
	I1205 00:10:30.478872   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 61/120
	I1205 00:10:31.480302   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 62/120
	I1205 00:10:32.481873   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 63/120
	I1205 00:10:33.483388   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 64/120
	I1205 00:10:34.485108   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 65/120
	I1205 00:10:35.486496   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 66/120
	I1205 00:10:36.488370   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 67/120
	I1205 00:10:37.490307   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 68/120
	I1205 00:10:38.492632   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 69/120
	I1205 00:10:39.494969   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 70/120
	I1205 00:10:40.496443   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 71/120
	I1205 00:10:41.498597   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 72/120
	I1205 00:10:42.500991   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 73/120
	I1205 00:10:43.502394   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 74/120
	I1205 00:10:44.504389   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 75/120
	I1205 00:10:45.506191   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 76/120
	I1205 00:10:46.508409   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 77/120
	I1205 00:10:47.510029   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 78/120
	I1205 00:10:48.511773   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 79/120
	I1205 00:10:49.514078   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 80/120
	I1205 00:10:50.515444   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 81/120
	I1205 00:10:51.517622   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 82/120
	I1205 00:10:52.519148   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 83/120
	I1205 00:10:53.520688   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 84/120
	I1205 00:10:54.522653   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 85/120
	I1205 00:10:55.524512   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 86/120
	I1205 00:10:56.525913   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 87/120
	I1205 00:10:57.527529   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 88/120
	I1205 00:10:58.528894   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 89/120
	I1205 00:10:59.531188   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 90/120
	I1205 00:11:00.532855   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 91/120
	I1205 00:11:01.534668   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 92/120
	I1205 00:11:02.536237   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 93/120
	I1205 00:11:03.537898   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 94/120
	I1205 00:11:04.539260   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 95/120
	I1205 00:11:05.540592   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 96/120
	I1205 00:11:06.541867   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 97/120
	I1205 00:11:07.543237   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 98/120
	I1205 00:11:08.544786   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 99/120
	I1205 00:11:09.546853   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 100/120
	I1205 00:11:10.548358   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 101/120
	I1205 00:11:11.550408   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 102/120
	I1205 00:11:12.551910   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 103/120
	I1205 00:11:13.553271   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 104/120
	I1205 00:11:14.555216   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 105/120
	I1205 00:11:15.556543   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 106/120
	I1205 00:11:16.557852   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 107/120
	I1205 00:11:17.559092   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 108/120
	I1205 00:11:18.560547   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 109/120
	I1205 00:11:19.562501   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 110/120
	I1205 00:11:20.563776   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 111/120
	I1205 00:11:21.565167   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 112/120
	I1205 00:11:22.566547   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 113/120
	I1205 00:11:23.568343   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 114/120
	I1205 00:11:24.570357   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 115/120
	I1205 00:11:25.572398   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 116/120
	I1205 00:11:26.573814   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 117/120
	I1205 00:11:27.575185   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 118/120
	I1205 00:11:28.576695   44048 main.go:141] libmachine: (ha-168758-m04) Waiting for machine to stop 119/120
	I1205 00:11:29.577930   44048 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 00:11:29.578000   44048 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 00:11:29.580190   44048 out.go:201] 
	W1205 00:11:29.582167   44048 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 00:11:29.582184   44048 out.go:270] * 
	* 
	W1205 00:11:29.584316   44048 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 00:11:29.585695   44048 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-168758 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr: (18.850356421s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr": 
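The stderr above shows the stop path requesting a shutdown of "ha-168758-m04" and then polling the VM state once per second ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT; the assertion lines confirm the node was still Running afterwards. The following is only a minimal, hedged Go sketch of that poll-until-stopped pattern as it appears in the log; the names (vm, VMState, stopWithTimeout) are illustrative placeholders and not minikube's actual libmachine API.

// Sketch of the stop/poll loop visible in the stderr above: request a stop,
// then wait up to maxAttempts seconds for the VM to report "stopped".
package main

import (
	"errors"
	"fmt"
	"time"
)

type VMState int

const (
	StateRunning VMState = iota
	StateStopped
)

// vm is a stand-in for the machine driver handle used by the stop path.
type vm struct{ state VMState }

// Stop requests a guest shutdown; the guest may ignore it, as happened here.
func (v *vm) Stop() error    { return nil }
func (v *vm) State() VMState { return v.state }

// stopWithTimeout mirrors the "Waiting for machine to stop N/120" loop.
func stopWithTimeout(v *vm, maxAttempts int) error {
	if err := v.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		if v.State() == StateStopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	v := &vm{state: StateRunning} // guest that never honours the shutdown request
	if err := stopWithTimeout(v, 120); err != nil {
		fmt.Println("stop err:", err) // corresponds to the GUEST_STOP_TIMEOUT exit above
	}
}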
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-168758 -n ha-168758
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 logs -n 25: (1.902459974s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m04 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp testdata/cp-test.txt                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt                       |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758 sudo cat                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758.txt                                 |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m02 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n                                                                 | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | ha-168758-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-168758 ssh -n ha-168758-m03 sudo cat                                          | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-168758 node stop m02 -v=7                                                     | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-168758 node start m02 -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-168758 -v=7                                                           | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-168758 -v=7                                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-168758 --wait=true -v=7                                                    | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:05 UTC | 05 Dec 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-168758                                                                | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:09 UTC |                     |
	| node    | ha-168758 node delete m03 -v=7                                                   | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:09 UTC | 05 Dec 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-168758 stop -v=7                                                              | ha-168758 | jenkins | v1.34.0 | 05 Dec 24 00:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:05:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:05:08.938207   42343 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:05:08.938460   42343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:05:08.938468   42343 out.go:358] Setting ErrFile to fd 2...
	I1205 00:05:08.938474   42343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:05:08.938630   42343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:05:08.939188   42343 out.go:352] Setting JSON to false
	I1205 00:05:08.940053   42343 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6455,"bootTime":1733350654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:05:08.940158   42343 start.go:139] virtualization: kvm guest
	I1205 00:05:08.942415   42343 out.go:177] * [ha-168758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:05:08.943689   42343 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:05:08.943735   42343 notify.go:220] Checking for updates...
	I1205 00:05:08.945936   42343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:05:08.947134   42343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:05:08.948111   42343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:05:08.949244   42343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:05:08.950407   42343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:05:08.951954   42343 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:05:08.952076   42343 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:05:08.952556   42343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:05:08.952633   42343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:05:08.969686   42343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I1205 00:05:08.970296   42343 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:05:08.970854   42343 main.go:141] libmachine: Using API Version  1
	I1205 00:05:08.970881   42343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:05:08.971308   42343 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:05:08.971523   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.007799   42343 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:05:09.009215   42343 start.go:297] selected driver: kvm2
	I1205 00:05:09.009232   42343 start.go:901] validating driver "kvm2" against &{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:05:09.009431   42343 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:05:09.009732   42343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:05:09.009796   42343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:05:09.025026   42343 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:05:09.025704   42343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:05:09.025737   42343 cni.go:84] Creating CNI manager for ""
	I1205 00:05:09.025786   42343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 00:05:09.025849   42343 start.go:340] cluster config:
	{Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:05:09.026014   42343 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:05:09.028087   42343 out.go:177] * Starting "ha-168758" primary control-plane node in "ha-168758" cluster
	I1205 00:05:09.029556   42343 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:05:09.029594   42343 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 00:05:09.029603   42343 cache.go:56] Caching tarball of preloaded images
	I1205 00:05:09.029696   42343 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:05:09.029710   42343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 00:05:09.029831   42343 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/config.json ...
	I1205 00:05:09.030061   42343 start.go:360] acquireMachinesLock for ha-168758: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:05:09.030109   42343 start.go:364] duration metric: took 27.723µs to acquireMachinesLock for "ha-168758"
	I1205 00:05:09.030131   42343 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:05:09.030142   42343 fix.go:54] fixHost starting: 
	I1205 00:05:09.030404   42343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:05:09.030440   42343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:05:09.044700   42343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I1205 00:05:09.045283   42343 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:05:09.045766   42343 main.go:141] libmachine: Using API Version  1
	I1205 00:05:09.045788   42343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:05:09.046182   42343 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:05:09.046417   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.046611   42343 main.go:141] libmachine: (ha-168758) Calling .GetState
	I1205 00:05:09.048174   42343 fix.go:112] recreateIfNeeded on ha-168758: state=Running err=<nil>
	W1205 00:05:09.048190   42343 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:05:09.050066   42343 out.go:177] * Updating the running kvm2 "ha-168758" VM ...
	I1205 00:05:09.051289   42343 machine.go:93] provisionDockerMachine start ...
	I1205 00:05:09.051310   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:05:09.051517   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.053879   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.054371   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.054396   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.054570   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.054724   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.054909   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.055037   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.055192   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.055492   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.055510   42343 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:05:09.172625   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1205 00:05:09.172651   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.172878   42343 buildroot.go:166] provisioning hostname "ha-168758"
	I1205 00:05:09.172917   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.173136   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.175740   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.176126   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.176159   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.176349   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.176531   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.176678   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.176815   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.176969   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.177134   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.177146   42343 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-168758 && echo "ha-168758" | sudo tee /etc/hostname
	I1205 00:05:09.310558   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-168758
	
	I1205 00:05:09.310593   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.313461   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.313788   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.313817   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.314053   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.314230   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.314369   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.314486   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.314629   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.314840   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.314857   42343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-168758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-168758/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-168758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:05:09.431068   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:05:09.431104   42343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:05:09.431120   42343 buildroot.go:174] setting up certificates
	I1205 00:05:09.431130   42343 provision.go:84] configureAuth start
	I1205 00:05:09.431138   42343 main.go:141] libmachine: (ha-168758) Calling .GetMachineName
	I1205 00:05:09.431396   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:05:09.433894   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.434205   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.434229   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.434396   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.436824   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.437194   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.437214   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.437342   42343 provision.go:143] copyHostCerts
	I1205 00:05:09.437401   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:05:09.437437   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:05:09.437446   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:05:09.437511   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:05:09.437620   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:05:09.437640   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:05:09.437644   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:05:09.437668   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:05:09.437747   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:05:09.437767   42343 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:05:09.437771   42343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:05:09.437792   42343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:05:09.437838   42343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.ha-168758 san=[127.0.0.1 192.168.39.246 ha-168758 localhost minikube]
	I1205 00:05:09.772880   42343 provision.go:177] copyRemoteCerts
	I1205 00:05:09.772935   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:05:09.772961   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.775822   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.776139   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.776155   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.776306   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.776503   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.776727   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.776913   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:05:09.864331   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 00:05:09.864479   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:05:09.891163   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 00:05:09.891232   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 00:05:09.919502   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 00:05:09.919572   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 00:05:09.947282   42343 provision.go:87] duration metric: took 516.137635ms to configureAuth
	I1205 00:05:09.947311   42343 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:05:09.947594   42343 config.go:182] Loaded profile config "ha-168758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:05:09.947671   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:05:09.950191   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.950591   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:05:09.950617   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:05:09.950713   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:05:09.950929   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.951131   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:05:09.951303   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:05:09.951512   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:05:09.951713   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:05:09.951738   42343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:06:40.837557   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:06:40.837614   42343 machine.go:96] duration metric: took 1m31.786308008s to provisionDockerMachine
	I1205 00:06:40.837630   42343 start.go:293] postStartSetup for "ha-168758" (driver="kvm2")
	I1205 00:06:40.837643   42343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:06:40.837666   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:40.837978   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:06:40.838006   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:40.840911   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.841270   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:40.841292   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.841462   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:40.841598   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:40.841775   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:40.841879   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:40.928323   42343 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:06:40.932271   42343 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:06:40.932295   42343 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:06:40.932351   42343 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:06:40.932444   42343 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:06:40.932455   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1205 00:06:40.932533   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:06:40.941203   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:06:40.962798   42343 start.go:296] duration metric: took 125.137658ms for postStartSetup
	I1205 00:06:40.962842   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:40.963161   42343 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 00:06:40.963189   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:40.965684   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.965994   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:40.966038   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:40.966327   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:40.966517   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:40.966680   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:40.966864   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	W1205 00:06:41.052726   42343 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 00:06:41.052759   42343 fix.go:56] duration metric: took 1m32.02261706s for fixHost
	I1205 00:06:41.052785   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.055247   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.055577   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.055616   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.055785   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.055978   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.056166   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.056332   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.056500   42343 main.go:141] libmachine: Using SSH client type: native
	I1205 00:06:41.056658   42343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1205 00:06:41.056667   42343 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:06:41.170231   42343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733357201.144699345
	
	I1205 00:06:41.170255   42343 fix.go:216] guest clock: 1733357201.144699345
	I1205 00:06:41.170264   42343 fix.go:229] Guest: 2024-12-05 00:06:41.144699345 +0000 UTC Remote: 2024-12-05 00:06:41.052767937 +0000 UTC m=+92.152134861 (delta=91.931408ms)
	I1205 00:06:41.170296   42343 fix.go:200] guest clock delta is within tolerance: 91.931408ms
	I1205 00:06:41.170303   42343 start.go:83] releasing machines lock for "ha-168758", held for 1m32.140182188s
	I1205 00:06:41.170326   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.170572   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:06:41.172903   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.173225   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.173246   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.173393   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.173869   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.174036   42343 main.go:141] libmachine: (ha-168758) Calling .DriverName
	I1205 00:06:41.174138   42343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:06:41.174175   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.174284   42343 ssh_runner.go:195] Run: cat /version.json
	I1205 00:06:41.174319   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHHostname
	I1205 00:06:41.176904   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.176974   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177338   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.177363   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177481   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.177514   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:41.177565   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:41.177635   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.177656   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHPort
	I1205 00:06:41.177822   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHKeyPath
	I1205 00:06:41.177823   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.178027   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:41.178058   42343 main.go:141] libmachine: (ha-168758) Calling .GetSSHUsername
	I1205 00:06:41.178199   42343 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/ha-168758/id_rsa Username:docker}
	I1205 00:06:41.268261   42343 ssh_runner.go:195] Run: systemctl --version
	I1205 00:06:41.301655   42343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:06:41.470236   42343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 00:06:41.479741   42343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:06:41.479805   42343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:06:41.489572   42343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 00:06:41.489589   42343 start.go:495] detecting cgroup driver to use...
	I1205 00:06:41.489632   42343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:06:41.505607   42343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:06:41.520393   42343 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:06:41.520457   42343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:06:41.533860   42343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:06:41.547032   42343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:06:41.694361   42343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:06:41.837770   42343 docker.go:233] disabling docker service ...
	I1205 00:06:41.837830   42343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:06:41.854200   42343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:06:41.866840   42343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:06:42.017830   42343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:06:42.176470   42343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:06:42.190520   42343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:06:42.209105   42343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 00:06:42.209173   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.218938   42343 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:06:42.218999   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.228932   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.239074   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.249732   42343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:06:42.259975   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.269784   42343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.280151   42343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:06:42.289814   42343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:06:42.299081   42343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:06:42.307857   42343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:06:42.448010   42343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 00:06:44.315321   42343 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.86726863s)
	I1205 00:06:44.315370   42343 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:06:44.315424   42343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:06:44.320071   42343 start.go:563] Will wait 60s for crictl version
	I1205 00:06:44.320126   42343 ssh_runner.go:195] Run: which crictl
	I1205 00:06:44.323583   42343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:06:44.359656   42343 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 00:06:44.359730   42343 ssh_runner.go:195] Run: crio --version
	I1205 00:06:44.387133   42343 ssh_runner.go:195] Run: crio --version
	I1205 00:06:44.416532   42343 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 00:06:44.417877   42343 main.go:141] libmachine: (ha-168758) Calling .GetIP
	I1205 00:06:44.420348   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:44.420767   42343 main.go:141] libmachine: (ha-168758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a0:85", ip: ""} in network mk-ha-168758: {Iface:virbr1 ExpiryTime:2024-12-05 00:56:09 +0000 UTC Type:0 Mac:52:54:00:e5:a0:85 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-168758 Clientid:01:52:54:00:e5:a0:85}
	I1205 00:06:44.420794   42343 main.go:141] libmachine: (ha-168758) DBG | domain ha-168758 has defined IP address 192.168.39.246 and MAC address 52:54:00:e5:a0:85 in network mk-ha-168758
	I1205 00:06:44.421019   42343 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:06:44.425611   42343 kubeadm.go:883] updating cluster {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:06:44.425740   42343 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:06:44.425782   42343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:06:44.471926   42343 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:06:44.471947   42343 crio.go:433] Images already preloaded, skipping extraction
	I1205 00:06:44.472007   42343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:06:44.504812   42343 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:06:44.504835   42343 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:06:44.504843   42343 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.2 crio true true} ...
	I1205 00:06:44.504969   42343 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-168758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 00:06:44.505073   42343 ssh_runner.go:195] Run: crio config
	I1205 00:06:44.551500   42343 cni.go:84] Creating CNI manager for ""
	I1205 00:06:44.551522   42343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 00:06:44.551531   42343 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:06:44.551558   42343 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-168758 NodeName:ha-168758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:06:44.551676   42343 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-168758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 00:06:44.551699   42343 kube-vip.go:115] generating kube-vip config ...
	I1205 00:06:44.551763   42343 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 00:06:44.562848   42343 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 00:06:44.562964   42343 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1205 00:06:44.563014   42343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:06:44.572015   42343 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:06:44.572064   42343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 00:06:44.580538   42343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 00:06:44.595789   42343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:06:44.611063   42343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 00:06:44.626623   42343 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 00:06:44.642458   42343 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 00:06:44.647067   42343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:06:44.787487   42343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:06:44.801541   42343 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758 for IP: 192.168.39.246
	I1205 00:06:44.801562   42343 certs.go:194] generating shared ca certs ...
	I1205 00:06:44.801588   42343 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.801724   42343 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:06:44.801765   42343 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:06:44.801775   42343 certs.go:256] generating profile certs ...
	I1205 00:06:44.801848   42343 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/client.key
	I1205 00:06:44.801874   42343 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17
	I1205 00:06:44.801892   42343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.70 192.168.39.211 192.168.39.254]
	I1205 00:06:44.872877   42343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 ...
	I1205 00:06:44.872906   42343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17: {Name:mkfc8c2d0eb166a47f30bbdc85d581d701b23d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.873072   42343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17 ...
	I1205 00:06:44.873083   42343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17: {Name:mk0625ec44fad7ec5524605fa122e4306d4c6dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:06:44.873165   42343 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt.bd26fa17 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt
	I1205 00:06:44.873327   42343 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key.bd26fa17 -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key
	I1205 00:06:44.873450   42343 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key
	I1205 00:06:44.873464   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 00:06:44.873477   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 00:06:44.873487   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 00:06:44.873500   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 00:06:44.873510   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 00:06:44.873522   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 00:06:44.873531   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 00:06:44.873543   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 00:06:44.873583   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:06:44.873610   42343 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:06:44.873619   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:06:44.873638   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:06:44.873659   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:06:44.873679   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:06:44.873721   42343 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:06:44.873745   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1205 00:06:44.873758   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1205 00:06:44.873771   42343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:44.874401   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:06:44.898496   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:06:44.921108   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:06:44.943429   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:06:44.965691   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 00:06:44.987885   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:06:45.011390   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:06:45.034167   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/ha-168758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 00:06:45.057443   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:06:45.079893   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:06:45.101792   42343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:06:45.124012   42343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:06:45.139176   42343 ssh_runner.go:195] Run: openssl version
	I1205 00:06:45.144597   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:06:45.154547   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.159170   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.159254   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:06:45.164742   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:06:45.173383   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:06:45.183023   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.186952   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.187013   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:06:45.192156   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:06:45.200699   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:06:45.210176   42343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.214220   42343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.214259   42343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:06:45.219747   42343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 00:06:45.228171   42343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:06:45.232178   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:06:45.237426   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:06:45.242697   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:06:45.247758   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:06:45.252927   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:06:45.258130   42343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 00:06:45.263094   42343 kubeadm.go:392] StartCluster: {Name:ha-168758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-168758 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.118 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:06:45.263189   42343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:06:45.263224   42343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:06:45.297607   42343 cri.go:89] found id: "7d1d0ce2d5700424631af163ab9a351842fca45d03f414c7acc753b2b958d51f"
	I1205 00:06:45.297628   42343 cri.go:89] found id: "761fd6abd68ce8d95999e639827fe67e574790e0988a1d0d22c050fd9bc008b6"
	I1205 00:06:45.297632   42343 cri.go:89] found id: "556aaa4a72a7fbe61fdbaa788d2ab07d68962afabfe3b1a093b3c492bd7e7ae7"
	I1205 00:06:45.297634   42343 cri.go:89] found id: "acd6b28e9485915585e8692c817d6453863de58c605efb19623b9ec880e6575c"
	I1205 00:06:45.297637   42343 cri.go:89] found id: "775689612e70ebb0fdbf340e07560af0dde18ef88c74c14567e3eb5a4349d977"
	I1205 00:06:45.297640   42343 cri.go:89] found id: "ee81cf4f88af47df4f08e3327a33366dcb59490556c33f05f1c268522ef30862"
	I1205 00:06:45.297643   42343 cri.go:89] found id: "4110b2797a4e47dbf52a2e944e4798fdab882650a3ab36c9c030a3d2b33c8302"
	I1205 00:06:45.297646   42343 cri.go:89] found id: "54fe1d8daa04948b85c638f7d54813bf690ada55ab55e7305b5b43e5e763d9f2"
	I1205 00:06:45.297648   42343 cri.go:89] found id: "f9cdaa348b12d79fa65c549c7499ee9a672d9022ad56ac2849108d97328fffdd"
	I1205 00:06:45.297654   42343 cri.go:89] found id: "f83a840ab3b70186dd93889ec27991d82b1c16bfc8c51a4b25d7f948512b9579"
	I1205 00:06:45.297656   42343 cri.go:89] found id: "1729d6a102a14657de9363bd0e27af8840ed651d0c6e1dab7c37cfba9644aba3"
	I1205 00:06:45.297659   42343 cri.go:89] found id: "9ba9268cc4c554184c4103b5182537b04bcf62310ffd456809ff44e39139bfb8"
	I1205 00:06:45.297662   42343 cri.go:89] found id: "413e537381433d39ef4e5bb5fbed1129efa8e1c99d3b1d77f2edb94b16ce3619"
	I1205 00:06:45.297665   42343 cri.go:89] found id: ""
	I1205 00:06:45.297698   42343 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
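For reference, the CRI-O runtime configuration steps in the log above (writing /etc/crictl.yaml and then patching /etc/crio/crio.conf.d/02-crio.conf with sed) should leave the guest with roughly the following contents. This is a sketch reconstructed from the sed expressions shown in the log, not a dump taken from the VM:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the sed edits)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]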
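The certificate steps above copy each CA into /usr/share/ca-certificates and then create the hash-named symlinks OpenSSL expects under /etc/ssl/certs; the 8-hex-digit filename is the subject hash that `openssl x509 -hash` prints for that certificate. A minimal sketch of the same procedure for the minikube CA, with the values taken from this run's log:

	# link the CA into /etc/ssl/certs, then add the subject-hash symlink OpenSSL looks up
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0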
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-168758 -n ha-168758
helpers_test.go:261: (dbg) Run:  kubectl --context ha-168758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (332.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-591074
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-591074
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-591074: exit status 82 (2m1.789709313s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-591074-m03"  ...
	* Stopping node "multinode-591074-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
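The stop above exits with status 82 (GUEST_STOP_TIMEOUT) while the VMs are still reported as "Running". If a wrapper script needed to tolerate that, a minimal sketch could retry the stop and surface the status; the retry count and profile name here are purely illustrative:

	# illustrative only: retry "minikube stop" when it times out with GUEST_STOP_TIMEOUT (exit status 82)
	for attempt in 1 2 3; do
	    out/minikube-linux-amd64 stop -p multinode-591074 && break
	    status=$?
	    echo "stop attempt ${attempt} failed with status ${status}; retrying"
	done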
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-591074" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-591074 --wait=true -v=8 --alsologtostderr
E1205 00:30:07.015817   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-591074 --wait=true -v=8 --alsologtostderr: (3m27.92244444s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-591074
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-591074 -n multinode-591074
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 logs -n 25: (2.087470746s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074-m02.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074:/home/docker/cp-test_multinode-591074-m02_multinode-591074.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074 sudo cat                                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m02_multinode-591074.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03:/home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074-m03 sudo cat                                   | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp testdata/cp-test.txt                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074:/home/docker/cp-test_multinode-591074-m03_multinode-591074.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074 sudo cat                                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02:/home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074-m02 sudo cat                                   | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-591074 node stop m03                                                          | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	| node    | multinode-591074 node start                                                             | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:26 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| stop    | -p multinode-591074                                                                     | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| start   | -p multinode-591074                                                                     | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:28 UTC | 05 Dec 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:28:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:28:40.047829   54466 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:28:40.047942   54466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:28:40.047951   54466 out.go:358] Setting ErrFile to fd 2...
	I1205 00:28:40.047955   54466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:28:40.048133   54466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:28:40.048677   54466 out.go:352] Setting JSON to false
	I1205 00:28:40.049588   54466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7866,"bootTime":1733350654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:28:40.049680   54466 start.go:139] virtualization: kvm guest
	I1205 00:28:40.051889   54466 out.go:177] * [multinode-591074] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:28:40.053269   54466 notify.go:220] Checking for updates...
	I1205 00:28:40.053281   54466 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:28:40.054709   54466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:28:40.056153   54466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:28:40.057461   54466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:28:40.058831   54466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:28:40.060505   54466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:28:40.062303   54466 config.go:182] Loaded profile config "multinode-591074": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:28:40.062388   54466 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:28:40.062821   54466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:28:40.062859   54466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:28:40.078139   54466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I1205 00:28:40.078659   54466 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:28:40.079144   54466 main.go:141] libmachine: Using API Version  1
	I1205 00:28:40.079165   54466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:28:40.079484   54466 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:28:40.079651   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.115623   54466 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:28:40.116695   54466 start.go:297] selected driver: kvm2
	I1205 00:28:40.116706   54466 start.go:901] validating driver "kvm2" against &{Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:28:40.116871   54466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:28:40.117204   54466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:28:40.117276   54466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:28:40.132090   54466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:28:40.132762   54466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:28:40.132794   54466 cni.go:84] Creating CNI manager for ""
	I1205 00:28:40.132843   54466 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 00:28:40.132899   54466 start.go:340] cluster config:
	{Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:28:40.133042   54466 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:28:40.134569   54466 out.go:177] * Starting "multinode-591074" primary control-plane node in "multinode-591074" cluster
	I1205 00:28:40.135767   54466 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:28:40.135802   54466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 00:28:40.135817   54466 cache.go:56] Caching tarball of preloaded images
	I1205 00:28:40.135876   54466 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:28:40.135888   54466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 00:28:40.136015   54466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/config.json ...
	I1205 00:28:40.136195   54466 start.go:360] acquireMachinesLock for multinode-591074: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:28:40.136238   54466 start.go:364] duration metric: took 25.703µs to acquireMachinesLock for "multinode-591074"
	I1205 00:28:40.136256   54466 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:28:40.136265   54466 fix.go:54] fixHost starting: 
	I1205 00:28:40.136526   54466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:28:40.136563   54466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:28:40.150701   54466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1205 00:28:40.151089   54466 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:28:40.151532   54466 main.go:141] libmachine: Using API Version  1
	I1205 00:28:40.151554   54466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:28:40.151829   54466 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:28:40.152063   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.152217   54466 main.go:141] libmachine: (multinode-591074) Calling .GetState
	I1205 00:28:40.153911   54466 fix.go:112] recreateIfNeeded on multinode-591074: state=Running err=<nil>
	W1205 00:28:40.153963   54466 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:28:40.155692   54466 out.go:177] * Updating the running kvm2 "multinode-591074" VM ...
	I1205 00:28:40.156729   54466 machine.go:93] provisionDockerMachine start ...
	I1205 00:28:40.156747   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.156938   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.159424   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.159882   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.159913   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.160082   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.160266   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.160399   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.160527   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.160710   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.160927   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.160939   54466 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:28:40.262765   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-591074
	
	I1205 00:28:40.262790   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.263054   54466 buildroot.go:166] provisioning hostname "multinode-591074"
	I1205 00:28:40.263082   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.263266   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.266286   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.266644   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.266699   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.266845   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.266997   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.267120   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.267231   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.267432   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.267601   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.267612   54466 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-591074 && echo "multinode-591074" | sudo tee /etc/hostname
	I1205 00:28:40.381638   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-591074
	
	I1205 00:28:40.381672   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.384687   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.385127   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.385167   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.385330   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.385531   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.385718   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.385867   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.386056   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.386214   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.386231   54466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-591074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-591074/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-591074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:28:40.486759   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:28:40.486787   54466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:28:40.486820   54466 buildroot.go:174] setting up certificates
	I1205 00:28:40.486831   54466 provision.go:84] configureAuth start
	I1205 00:28:40.486845   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.487120   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:28:40.489787   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.490303   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.490332   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.490516   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.492706   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.493005   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.493046   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.493147   54466 provision.go:143] copyHostCerts
	I1205 00:28:40.493169   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:28:40.493208   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:28:40.493220   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:28:40.493314   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:28:40.493420   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:28:40.493439   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:28:40.493446   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:28:40.493478   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:28:40.493534   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:28:40.493551   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:28:40.493557   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:28:40.493578   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:28:40.493636   54466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.multinode-591074 san=[127.0.0.1 192.168.39.176 localhost minikube multinode-591074]
	I1205 00:28:40.681260   54466 provision.go:177] copyRemoteCerts
	I1205 00:28:40.681326   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:28:40.681348   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.684000   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.684376   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.684418   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.684572   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.684756   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.684904   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.685051   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:28:40.764035   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 00:28:40.764110   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:28:40.793860   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 00:28:40.793921   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 00:28:40.816708   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 00:28:40.816794   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 00:28:40.839102   54466 provision.go:87] duration metric: took 352.256196ms to configureAuth
	I1205 00:28:40.839131   54466 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:28:40.839391   54466 config.go:182] Loaded profile config "multinode-591074": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:28:40.839461   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.842269   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.842648   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.842673   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.842833   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.843023   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.843186   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.843344   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.843522   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.843698   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.843718   54466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:30:11.601992   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:30:11.602021   54466 machine.go:96] duration metric: took 1m31.4452786s to provisionDockerMachine
	I1205 00:30:11.602032   54466 start.go:293] postStartSetup for "multinode-591074" (driver="kvm2")
	I1205 00:30:11.602043   54466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:30:11.602058   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.602356   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:30:11.602378   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.605202   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.605702   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.605726   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.605840   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.606020   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.606175   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.606345   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.684627   54466 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:30:11.688621   54466 command_runner.go:130] > NAME=Buildroot
	I1205 00:30:11.688644   54466 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 00:30:11.688651   54466 command_runner.go:130] > ID=buildroot
	I1205 00:30:11.688658   54466 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 00:30:11.688664   54466 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 00:30:11.688707   54466 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:30:11.688724   54466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:30:11.688782   54466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:30:11.688876   54466 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:30:11.688886   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1205 00:30:11.689004   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:30:11.697889   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:30:11.720349   54466 start.go:296] duration metric: took 118.303665ms for postStartSetup
	I1205 00:30:11.720402   54466 fix.go:56] duration metric: took 1m31.58412026s for fixHost
	I1205 00:30:11.720426   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.723034   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.723545   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.723585   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.723761   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.723967   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.724100   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.724245   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.724423   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:30:11.724637   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:30:11.724652   54466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:30:11.826710   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733358611.808506478
	
	I1205 00:30:11.826733   54466 fix.go:216] guest clock: 1733358611.808506478
	I1205 00:30:11.826740   54466 fix.go:229] Guest: 2024-12-05 00:30:11.808506478 +0000 UTC Remote: 2024-12-05 00:30:11.720407604 +0000 UTC m=+91.710154212 (delta=88.098874ms)
	I1205 00:30:11.826771   54466 fix.go:200] guest clock delta is within tolerance: 88.098874ms
	I1205 00:30:11.826776   54466 start.go:83] releasing machines lock for "multinode-591074", held for 1m31.690526862s
	I1205 00:30:11.826793   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.827027   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:30:11.829627   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.829982   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.830008   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.830138   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830628   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830771   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830882   54466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:30:11.830919   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.830990   54466 ssh_runner.go:195] Run: cat /version.json
	I1205 00:30:11.831007   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.833438   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833734   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833860   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.833888   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833999   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.834180   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.834184   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.834212   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.834341   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.834360   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.834492   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.834540   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.834610   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.834716   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.914279   54466 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 00:30:11.915087   54466 ssh_runner.go:195] Run: systemctl --version
	I1205 00:30:11.951559   54466 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 00:30:11.951617   54466 command_runner.go:130] > systemd 252 (252)
	I1205 00:30:11.951648   54466 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 00:30:11.951708   54466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:30:12.116125   54466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 00:30:12.126607   54466 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 00:30:12.126956   54466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:30:12.127051   54466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:30:12.136496   54466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 00:30:12.136517   54466 start.go:495] detecting cgroup driver to use...
	I1205 00:30:12.136579   54466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:30:12.152523   54466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:30:12.165813   54466 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:30:12.165873   54466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:30:12.178540   54466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:30:12.191038   54466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:30:12.326354   54466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:30:12.465084   54466 docker.go:233] disabling docker service ...
	I1205 00:30:12.465164   54466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:30:12.497659   54466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:30:12.511275   54466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:30:12.659401   54466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:30:12.795942   54466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:30:12.809840   54466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:30:12.828727   54466 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 00:30:12.828779   54466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 00:30:12.828831   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.839340   54466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:30:12.839414   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.849222   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.858948   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.868760   54466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:30:12.878540   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.888153   54466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.898510   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.908228   54466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:30:12.916992   54466 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 00:30:12.917073   54466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:30:12.925494   54466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:30:13.055416   54466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 00:30:17.598366   54466 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.542915423s)
	I1205 00:30:17.598398   54466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:30:17.598441   54466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:30:17.603253   54466 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 00:30:17.603283   54466 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 00:30:17.603295   54466 command_runner.go:130] > Device: 0,22	Inode: 1289        Links: 1
	I1205 00:30:17.603306   54466 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 00:30:17.603320   54466 command_runner.go:130] > Access: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603329   54466 command_runner.go:130] > Modify: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603334   54466 command_runner.go:130] > Change: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603338   54466 command_runner.go:130] >  Birth: -
	I1205 00:30:17.603361   54466 start.go:563] Will wait 60s for crictl version
	I1205 00:30:17.603444   54466 ssh_runner.go:195] Run: which crictl
	I1205 00:30:17.607117   54466 command_runner.go:130] > /usr/bin/crictl
	I1205 00:30:17.607218   54466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:30:17.641935   54466 command_runner.go:130] > Version:  0.1.0
	I1205 00:30:17.642026   54466 command_runner.go:130] > RuntimeName:  cri-o
	I1205 00:30:17.642034   54466 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 00:30:17.642042   54466 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 00:30:17.643306   54466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 00:30:17.643379   54466 ssh_runner.go:195] Run: crio --version
	I1205 00:30:17.671316   54466 command_runner.go:130] > crio version 1.29.1
	I1205 00:30:17.671347   54466 command_runner.go:130] > Version:        1.29.1
	I1205 00:30:17.671354   54466 command_runner.go:130] > GitCommit:      unknown
	I1205 00:30:17.671358   54466 command_runner.go:130] > GitCommitDate:  unknown
	I1205 00:30:17.671362   54466 command_runner.go:130] > GitTreeState:   clean
	I1205 00:30:17.671368   54466 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 00:30:17.671372   54466 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 00:30:17.671376   54466 command_runner.go:130] > Compiler:       gc
	I1205 00:30:17.671380   54466 command_runner.go:130] > Platform:       linux/amd64
	I1205 00:30:17.671384   54466 command_runner.go:130] > Linkmode:       dynamic
	I1205 00:30:17.671389   54466 command_runner.go:130] > BuildTags:      
	I1205 00:30:17.671393   54466 command_runner.go:130] >   containers_image_ostree_stub
	I1205 00:30:17.671397   54466 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 00:30:17.671401   54466 command_runner.go:130] >   btrfs_noversion
	I1205 00:30:17.671406   54466 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 00:30:17.671411   54466 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 00:30:17.671415   54466 command_runner.go:130] >   seccomp
	I1205 00:30:17.671419   54466 command_runner.go:130] > LDFlags:          unknown
	I1205 00:30:17.671424   54466 command_runner.go:130] > SeccompEnabled:   true
	I1205 00:30:17.671428   54466 command_runner.go:130] > AppArmorEnabled:  false
	I1205 00:30:17.671523   54466 ssh_runner.go:195] Run: crio --version
	I1205 00:30:17.698287   54466 command_runner.go:130] > crio version 1.29.1
	I1205 00:30:17.698311   54466 command_runner.go:130] > Version:        1.29.1
	I1205 00:30:17.698318   54466 command_runner.go:130] > GitCommit:      unknown
	I1205 00:30:17.698323   54466 command_runner.go:130] > GitCommitDate:  unknown
	I1205 00:30:17.698326   54466 command_runner.go:130] > GitTreeState:   clean
	I1205 00:30:17.698333   54466 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 00:30:17.698337   54466 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 00:30:17.698340   54466 command_runner.go:130] > Compiler:       gc
	I1205 00:30:17.698345   54466 command_runner.go:130] > Platform:       linux/amd64
	I1205 00:30:17.698349   54466 command_runner.go:130] > Linkmode:       dynamic
	I1205 00:30:17.698353   54466 command_runner.go:130] > BuildTags:      
	I1205 00:30:17.698357   54466 command_runner.go:130] >   containers_image_ostree_stub
	I1205 00:30:17.698361   54466 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 00:30:17.698365   54466 command_runner.go:130] >   btrfs_noversion
	I1205 00:30:17.698369   54466 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 00:30:17.698373   54466 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 00:30:17.698376   54466 command_runner.go:130] >   seccomp
	I1205 00:30:17.698380   54466 command_runner.go:130] > LDFlags:          unknown
	I1205 00:30:17.698384   54466 command_runner.go:130] > SeccompEnabled:   true
	I1205 00:30:17.698388   54466 command_runner.go:130] > AppArmorEnabled:  false
	I1205 00:30:17.701640   54466 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 00:30:17.703249   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:30:17.706043   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:17.706401   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:17.706433   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:17.706693   54466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:30:17.710810   54466 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 00:30:17.710889   54466 kubeadm.go:883] updating cluster {Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:30:17.711003   54466 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:30:17.711047   54466 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:30:17.760061   54466 command_runner.go:130] > {
	I1205 00:30:17.760089   54466 command_runner.go:130] >   "images": [
	I1205 00:30:17.760094   54466 command_runner.go:130] >     {
	I1205 00:30:17.760101   54466 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 00:30:17.760106   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760112   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 00:30:17.760116   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760119   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760132   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 00:30:17.760144   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 00:30:17.760151   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760158   54466 command_runner.go:130] >       "size": "94965812",
	I1205 00:30:17.760174   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760188   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760195   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760199   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760203   54466 command_runner.go:130] >     },
	I1205 00:30:17.760207   54466 command_runner.go:130] >     {
	I1205 00:30:17.760212   54466 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 00:30:17.760218   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760226   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 00:30:17.760233   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760240   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760252   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 00:30:17.760267   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 00:30:17.760273   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760281   54466 command_runner.go:130] >       "size": "94958644",
	I1205 00:30:17.760287   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760298   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760304   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760310   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760317   54466 command_runner.go:130] >     },
	I1205 00:30:17.760323   54466 command_runner.go:130] >     {
	I1205 00:30:17.760336   54466 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 00:30:17.760344   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760352   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 00:30:17.760359   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760365   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760376   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 00:30:17.760388   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 00:30:17.760392   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760401   54466 command_runner.go:130] >       "size": "1363676",
	I1205 00:30:17.760411   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760418   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760427   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760437   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760446   54466 command_runner.go:130] >     },
	I1205 00:30:17.760454   54466 command_runner.go:130] >     {
	I1205 00:30:17.760463   54466 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 00:30:17.760473   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760478   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 00:30:17.760483   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760494   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760507   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 00:30:17.760525   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 00:30:17.760535   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760541   54466 command_runner.go:130] >       "size": "31470524",
	I1205 00:30:17.760550   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760559   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760563   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760568   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760577   54466 command_runner.go:130] >     },
	I1205 00:30:17.760584   54466 command_runner.go:130] >     {
	I1205 00:30:17.760597   54466 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 00:30:17.760608   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760616   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 00:30:17.760625   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760632   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760644   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 00:30:17.760652   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 00:30:17.760657   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760664   54466 command_runner.go:130] >       "size": "63273227",
	I1205 00:30:17.760671   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760678   54466 command_runner.go:130] >       "username": "nonroot",
	I1205 00:30:17.760701   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760709   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760715   54466 command_runner.go:130] >     },
	I1205 00:30:17.760722   54466 command_runner.go:130] >     {
	I1205 00:30:17.760732   54466 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 00:30:17.760738   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760754   54466 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 00:30:17.760762   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760769   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760783   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 00:30:17.760798   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 00:30:17.760812   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760821   54466 command_runner.go:130] >       "size": "149009664",
	I1205 00:30:17.760826   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.760835   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.760842   54466 command_runner.go:130] >       },
	I1205 00:30:17.760852   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760864   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760874   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760880   54466 command_runner.go:130] >     },
	I1205 00:30:17.760886   54466 command_runner.go:130] >     {
	I1205 00:30:17.760910   54466 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 00:30:17.760922   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760930   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 00:30:17.760940   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760946   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760965   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 00:30:17.760980   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 00:30:17.760993   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761000   54466 command_runner.go:130] >       "size": "95274464",
	I1205 00:30:17.761005   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761012   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761018   54466 command_runner.go:130] >       },
	I1205 00:30:17.761028   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761038   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761045   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761053   54466 command_runner.go:130] >     },
	I1205 00:30:17.761058   54466 command_runner.go:130] >     {
	I1205 00:30:17.761072   54466 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 00:30:17.761078   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761087   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 00:30:17.761092   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761103   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761126   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 00:30:17.761142   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 00:30:17.761148   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761155   54466 command_runner.go:130] >       "size": "89474374",
	I1205 00:30:17.761161   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761166   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761170   54466 command_runner.go:130] >       },
	I1205 00:30:17.761174   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761180   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761188   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761193   54466 command_runner.go:130] >     },
	I1205 00:30:17.761199   54466 command_runner.go:130] >     {
	I1205 00:30:17.761208   54466 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 00:30:17.761214   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761222   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 00:30:17.761227   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761237   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761247   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 00:30:17.761256   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 00:30:17.761259   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761265   54466 command_runner.go:130] >       "size": "92783513",
	I1205 00:30:17.761271   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.761277   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761284   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761290   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761295   54466 command_runner.go:130] >     },
	I1205 00:30:17.761301   54466 command_runner.go:130] >     {
	I1205 00:30:17.761310   54466 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 00:30:17.761318   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761326   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 00:30:17.761335   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761341   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761354   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 00:30:17.761369   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 00:30:17.761378   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761385   54466 command_runner.go:130] >       "size": "68457798",
	I1205 00:30:17.761394   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761401   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761410   54466 command_runner.go:130] >       },
	I1205 00:30:17.761416   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761425   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761430   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761436   54466 command_runner.go:130] >     },
	I1205 00:30:17.761442   54466 command_runner.go:130] >     {
	I1205 00:30:17.761456   54466 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 00:30:17.761465   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761473   54466 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 00:30:17.761481   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761488   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761502   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 00:30:17.761519   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 00:30:17.761528   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761535   54466 command_runner.go:130] >       "size": "742080",
	I1205 00:30:17.761545   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761551   54466 command_runner.go:130] >         "value": "65535"
	I1205 00:30:17.761565   54466 command_runner.go:130] >       },
	I1205 00:30:17.761576   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761583   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761592   54466 command_runner.go:130] >       "pinned": true
	I1205 00:30:17.761597   54466 command_runner.go:130] >     }
	I1205 00:30:17.761604   54466 command_runner.go:130] >   ]
	I1205 00:30:17.761607   54466 command_runner.go:130] > }
	I1205 00:30:17.761861   54466 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:30:17.761878   54466 crio.go:433] Images already preloaded, skipping extraction
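	(Annotation.) The preload check above works by parsing the JSON emitted by `sudo crictl images --output json`. The following is a minimal, hypothetical Go sketch of that kind of check, not minikube's actual crio.go code; it only uses the command, JSON field names, and image tags visible in the output above.

	// Hypothetical sketch: verify that the control-plane images are already
	// present by parsing `crictl images --output json` (same command and JSON
	// shape as in the log above). Not minikube's actual implementation.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Expected tags taken from the v1.31.2 image list logged above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/kube-controller-manager:v1.31.2",
			"registry.k8s.io/kube-scheduler:v1.31.2",
			"registry.k8s.io/kube-proxy:v1.31.2",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/pause:3.10",
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, tag := range want {
			if !have[tag] {
				missing = append(missing, tag)
			}
		}
		if len(missing) == 0 {
			fmt.Println("all images are preloaded for cri-o runtime")
		} else {
			fmt.Println("missing:", strings.Join(missing, ", "))
		}
	}

	The real check in crio.go presumably also accounts for the preload tarball for the requested Kubernetes version; the sketch only verifies the tags logged above.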
	I1205 00:30:17.761936   54466 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:30:17.798485   54466 command_runner.go:130] > {
	I1205 00:30:17.798510   54466 command_runner.go:130] >   "images": [
	I1205 00:30:17.798515   54466 command_runner.go:130] >     {
	I1205 00:30:17.798522   54466 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 00:30:17.798527   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798532   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 00:30:17.798536   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798540   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798549   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 00:30:17.798556   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 00:30:17.798560   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798565   54466 command_runner.go:130] >       "size": "94965812",
	I1205 00:30:17.798570   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798574   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798590   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798594   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798597   54466 command_runner.go:130] >     },
	I1205 00:30:17.798601   54466 command_runner.go:130] >     {
	I1205 00:30:17.798610   54466 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 00:30:17.798614   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798619   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 00:30:17.798622   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798634   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798642   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 00:30:17.798649   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 00:30:17.798653   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798658   54466 command_runner.go:130] >       "size": "94958644",
	I1205 00:30:17.798662   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798669   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798673   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798677   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798680   54466 command_runner.go:130] >     },
	I1205 00:30:17.798683   54466 command_runner.go:130] >     {
	I1205 00:30:17.798689   54466 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 00:30:17.798694   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798699   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 00:30:17.798702   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798706   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798714   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 00:30:17.798721   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 00:30:17.798727   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798731   54466 command_runner.go:130] >       "size": "1363676",
	I1205 00:30:17.798736   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798740   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798747   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798751   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798754   54466 command_runner.go:130] >     },
	I1205 00:30:17.798758   54466 command_runner.go:130] >     {
	I1205 00:30:17.798766   54466 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 00:30:17.798770   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798775   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 00:30:17.798779   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798783   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798791   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 00:30:17.798806   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 00:30:17.798817   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798824   54466 command_runner.go:130] >       "size": "31470524",
	I1205 00:30:17.798828   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798834   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798839   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798845   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798849   54466 command_runner.go:130] >     },
	I1205 00:30:17.798854   54466 command_runner.go:130] >     {
	I1205 00:30:17.798861   54466 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 00:30:17.798867   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798872   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 00:30:17.798879   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798883   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798893   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 00:30:17.798902   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 00:30:17.798908   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798912   54466 command_runner.go:130] >       "size": "63273227",
	I1205 00:30:17.798918   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798922   54466 command_runner.go:130] >       "username": "nonroot",
	I1205 00:30:17.798926   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798932   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798935   54466 command_runner.go:130] >     },
	I1205 00:30:17.798941   54466 command_runner.go:130] >     {
	I1205 00:30:17.798947   54466 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 00:30:17.798953   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798958   54466 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 00:30:17.798963   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798967   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798976   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 00:30:17.798985   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 00:30:17.798991   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798995   54466 command_runner.go:130] >       "size": "149009664",
	I1205 00:30:17.799001   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799010   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799017   54466 command_runner.go:130] >       },
	I1205 00:30:17.799022   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799031   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799038   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799041   54466 command_runner.go:130] >     },
	I1205 00:30:17.799047   54466 command_runner.go:130] >     {
	I1205 00:30:17.799053   54466 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 00:30:17.799060   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799065   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 00:30:17.799071   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799075   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799085   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 00:30:17.799094   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 00:30:17.799100   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799104   54466 command_runner.go:130] >       "size": "95274464",
	I1205 00:30:17.799110   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799114   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799120   54466 command_runner.go:130] >       },
	I1205 00:30:17.799124   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799130   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799134   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799139   54466 command_runner.go:130] >     },
	I1205 00:30:17.799143   54466 command_runner.go:130] >     {
	I1205 00:30:17.799151   54466 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 00:30:17.799157   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799162   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 00:30:17.799168   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799172   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799196   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 00:30:17.799206   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 00:30:17.799209   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799213   54466 command_runner.go:130] >       "size": "89474374",
	I1205 00:30:17.799222   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799228   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799231   54466 command_runner.go:130] >       },
	I1205 00:30:17.799238   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799242   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799249   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799252   54466 command_runner.go:130] >     },
	I1205 00:30:17.799258   54466 command_runner.go:130] >     {
	I1205 00:30:17.799272   54466 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 00:30:17.799278   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799283   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 00:30:17.799289   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799293   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799303   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 00:30:17.799314   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 00:30:17.799320   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799324   54466 command_runner.go:130] >       "size": "92783513",
	I1205 00:30:17.799330   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.799334   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799340   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799344   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799349   54466 command_runner.go:130] >     },
	I1205 00:30:17.799352   54466 command_runner.go:130] >     {
	I1205 00:30:17.799359   54466 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 00:30:17.799365   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799370   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 00:30:17.799376   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799380   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799390   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 00:30:17.799399   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 00:30:17.799405   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799409   54466 command_runner.go:130] >       "size": "68457798",
	I1205 00:30:17.799415   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799424   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799430   54466 command_runner.go:130] >       },
	I1205 00:30:17.799434   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799440   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799444   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799450   54466 command_runner.go:130] >     },
	I1205 00:30:17.799453   54466 command_runner.go:130] >     {
	I1205 00:30:17.799461   54466 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 00:30:17.799468   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799472   54466 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 00:30:17.799478   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799482   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799489   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 00:30:17.799500   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 00:30:17.799506   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799510   54466 command_runner.go:130] >       "size": "742080",
	I1205 00:30:17.799516   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799521   54466 command_runner.go:130] >         "value": "65535"
	I1205 00:30:17.799526   54466 command_runner.go:130] >       },
	I1205 00:30:17.799530   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799536   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799542   54466 command_runner.go:130] >       "pinned": true
	I1205 00:30:17.799548   54466 command_runner.go:130] >     }
	I1205 00:30:17.799553   54466 command_runner.go:130] >   ]
	I1205 00:30:17.799558   54466 command_runner.go:130] > }
	I1205 00:30:17.799689   54466 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:30:17.799701   54466 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:30:17.799708   54466 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1205 00:30:17.799809   54466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-591074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
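	(Annotation.) The kubelet drop-in rendered above uses the standard systemd override idiom: the first, empty "ExecStart=" clears the command inherited from the base kubelet.service, and the second ExecStart line sets the minikube-specific command with the node name and IP. Below is a hypothetical Go sketch that renders the same drop-in from a template; it is not minikube's actual code, and the values are simply the ones taken from the log above.

	// Hypothetical sketch: render a kubelet systemd drop-in like the one logged
	// above. The empty "ExecStart=" line clears the ExecStart inherited from the
	// base kubelet.service before the override sets the new command.
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values taken from the log above (multinode-591074 control-plane node).
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.2",
			"NodeName":          "multinode-591074",
			"NodeIP":            "192.168.39.176",
		}); err != nil {
			panic(err)
		}
	}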
	I1205 00:30:17.799873   54466 ssh_runner.go:195] Run: crio config
	I1205 00:30:17.841237   54466 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 00:30:17.841283   54466 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 00:30:17.841293   54466 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 00:30:17.841299   54466 command_runner.go:130] > #
	I1205 00:30:17.841308   54466 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 00:30:17.841315   54466 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 00:30:17.841321   54466 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 00:30:17.841328   54466 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 00:30:17.841331   54466 command_runner.go:130] > # reload'.
	I1205 00:30:17.841337   54466 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 00:30:17.841347   54466 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 00:30:17.841361   54466 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 00:30:17.841371   54466 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 00:30:17.841378   54466 command_runner.go:130] > [crio]
	I1205 00:30:17.841386   54466 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 00:30:17.841393   54466 command_runner.go:130] > # containers images, in this directory.
	I1205 00:30:17.841399   54466 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 00:30:17.841418   54466 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 00:30:17.841432   54466 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 00:30:17.841446   54466 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 00:30:17.841581   54466 command_runner.go:130] > # imagestore = ""
	I1205 00:30:17.841606   54466 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 00:30:17.841617   54466 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 00:30:17.841729   54466 command_runner.go:130] > storage_driver = "overlay"
	I1205 00:30:17.841745   54466 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 00:30:17.841754   54466 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 00:30:17.841761   54466 command_runner.go:130] > storage_option = [
	I1205 00:30:17.841937   54466 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 00:30:17.842304   54466 command_runner.go:130] > ]
	I1205 00:30:17.842345   54466 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 00:30:17.842357   54466 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 00:30:17.842408   54466 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 00:30:17.842427   54466 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 00:30:17.842437   54466 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 00:30:17.842447   54466 command_runner.go:130] > # always happen on a node reboot
	I1205 00:30:17.842675   54466 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 00:30:17.842699   54466 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 00:30:17.842710   54466 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 00:30:17.842717   54466 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 00:30:17.842843   54466 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 00:30:17.842861   54466 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 00:30:17.842873   54466 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 00:30:17.843013   54466 command_runner.go:130] > # internal_wipe = true
	I1205 00:30:17.843026   54466 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 00:30:17.843031   54466 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 00:30:17.843164   54466 command_runner.go:130] > # internal_repair = false
	I1205 00:30:17.843179   54466 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 00:30:17.843189   54466 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 00:30:17.843198   54466 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 00:30:17.843379   54466 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 00:30:17.843391   54466 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 00:30:17.843396   54466 command_runner.go:130] > [crio.api]
	I1205 00:30:17.843401   54466 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 00:30:17.843562   54466 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 00:30:17.843576   54466 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 00:30:17.843834   54466 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 00:30:17.843853   54466 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 00:30:17.843862   54466 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 00:30:17.844053   54466 command_runner.go:130] > # stream_port = "0"
	I1205 00:30:17.844073   54466 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 00:30:17.844174   54466 command_runner.go:130] > # stream_enable_tls = false
	I1205 00:30:17.844193   54466 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 00:30:17.844332   54466 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 00:30:17.844352   54466 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 00:30:17.844363   54466 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 00:30:17.844369   54466 command_runner.go:130] > # minutes.
	I1205 00:30:17.844461   54466 command_runner.go:130] > # stream_tls_cert = ""
	I1205 00:30:17.844482   54466 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 00:30:17.844495   54466 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 00:30:17.844672   54466 command_runner.go:130] > # stream_tls_key = ""
	I1205 00:30:17.844687   54466 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 00:30:17.844698   54466 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 00:30:17.844724   54466 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 00:30:17.844840   54466 command_runner.go:130] > # stream_tls_ca = ""
	I1205 00:30:17.844858   54466 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 00:30:17.844967   54466 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 00:30:17.844984   54466 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 00:30:17.845072   54466 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 00:30:17.845086   54466 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 00:30:17.845095   54466 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 00:30:17.845104   54466 command_runner.go:130] > [crio.runtime]
	I1205 00:30:17.845118   54466 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 00:30:17.845128   54466 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 00:30:17.845138   54466 command_runner.go:130] > # "nofile=1024:2048"
	I1205 00:30:17.845151   54466 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 00:30:17.845181   54466 command_runner.go:130] > # default_ulimits = [
	I1205 00:30:17.845308   54466 command_runner.go:130] > # ]
	I1205 00:30:17.845335   54466 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 00:30:17.845489   54466 command_runner.go:130] > # no_pivot = false
	I1205 00:30:17.845564   54466 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 00:30:17.845581   54466 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 00:30:17.845840   54466 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 00:30:17.845856   54466 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 00:30:17.845864   54466 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 00:30:17.845876   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 00:30:17.845887   54466 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 00:30:17.845895   54466 command_runner.go:130] > # Cgroup setting for conmon
	I1205 00:30:17.845908   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 00:30:17.845917   54466 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 00:30:17.845949   54466 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 00:30:17.845961   54466 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 00:30:17.845972   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 00:30:17.845979   54466 command_runner.go:130] > conmon_env = [
	I1205 00:30:17.845991   54466 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 00:30:17.846000   54466 command_runner.go:130] > ]
	I1205 00:30:17.846013   54466 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 00:30:17.846025   54466 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 00:30:17.846037   54466 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 00:30:17.846045   54466 command_runner.go:130] > # default_env = [
	I1205 00:30:17.846051   54466 command_runner.go:130] > # ]
	I1205 00:30:17.846061   54466 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 00:30:17.846077   54466 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1205 00:30:17.846084   54466 command_runner.go:130] > # selinux = false
	I1205 00:30:17.846099   54466 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 00:30:17.846112   54466 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 00:30:17.846126   54466 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 00:30:17.846136   54466 command_runner.go:130] > # seccomp_profile = ""
	I1205 00:30:17.846148   54466 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 00:30:17.846162   54466 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 00:30:17.846175   54466 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 00:30:17.846186   54466 command_runner.go:130] > # which might increase security.
	I1205 00:30:17.846198   54466 command_runner.go:130] > # This option is currently deprecated,
	I1205 00:30:17.846212   54466 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 00:30:17.846223   54466 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 00:30:17.846235   54466 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 00:30:17.846249   54466 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 00:30:17.846263   54466 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 00:30:17.846279   54466 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 00:30:17.846298   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.846372   54466 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 00:30:17.846386   54466 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 00:30:17.846396   54466 command_runner.go:130] > # the cgroup blockio controller.
	I1205 00:30:17.846421   54466 command_runner.go:130] > # blockio_config_file = ""
	I1205 00:30:17.846436   54466 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 00:30:17.846444   54466 command_runner.go:130] > # blockio parameters.
	I1205 00:30:17.846459   54466 command_runner.go:130] > # blockio_reload = false
	I1205 00:30:17.846473   54466 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 00:30:17.846482   54466 command_runner.go:130] > # irqbalance daemon.
	I1205 00:30:17.846494   54466 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 00:30:17.846508   54466 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 00:30:17.846523   54466 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 00:30:17.846538   54466 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 00:30:17.846555   54466 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 00:30:17.846569   54466 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 00:30:17.846578   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.846588   54466 command_runner.go:130] > # rdt_config_file = ""
	I1205 00:30:17.846598   54466 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 00:30:17.846610   54466 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 00:30:17.846650   54466 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 00:30:17.846660   54466 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 00:30:17.846671   54466 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 00:30:17.846685   54466 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 00:30:17.846695   54466 command_runner.go:130] > # will be added.
	I1205 00:30:17.846705   54466 command_runner.go:130] > # default_capabilities = [
	I1205 00:30:17.846714   54466 command_runner.go:130] > # 	"CHOWN",
	I1205 00:30:17.846721   54466 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 00:30:17.846731   54466 command_runner.go:130] > # 	"FSETID",
	I1205 00:30:17.846738   54466 command_runner.go:130] > # 	"FOWNER",
	I1205 00:30:17.846748   54466 command_runner.go:130] > # 	"SETGID",
	I1205 00:30:17.846756   54466 command_runner.go:130] > # 	"SETUID",
	I1205 00:30:17.846764   54466 command_runner.go:130] > # 	"SETPCAP",
	I1205 00:30:17.846774   54466 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 00:30:17.846780   54466 command_runner.go:130] > # 	"KILL",
	I1205 00:30:17.846789   54466 command_runner.go:130] > # ]
	I1205 00:30:17.846803   54466 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 00:30:17.846823   54466 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 00:30:17.846836   54466 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 00:30:17.846850   54466 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 00:30:17.846930   54466 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 00:30:17.846944   54466 command_runner.go:130] > default_sysctls = [
	I1205 00:30:17.846952   54466 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 00:30:17.846961   54466 command_runner.go:130] > ]
	I1205 00:30:17.846971   54466 command_runner.go:130] > # List of devices on the host that a
	I1205 00:30:17.846985   54466 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 00:30:17.846994   54466 command_runner.go:130] > # allowed_devices = [
	I1205 00:30:17.847000   54466 command_runner.go:130] > # 	"/dev/fuse",
	I1205 00:30:17.847005   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847013   54466 command_runner.go:130] > # List of additional devices. specified as
	I1205 00:30:17.847029   54466 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 00:30:17.847042   54466 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 00:30:17.847052   54466 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 00:30:17.847063   54466 command_runner.go:130] > # additional_devices = [
	I1205 00:30:17.847072   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847081   54466 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 00:30:17.847099   54466 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 00:30:17.847109   54466 command_runner.go:130] > # 	"/etc/cdi",
	I1205 00:30:17.847116   54466 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 00:30:17.847124   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847136   54466 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 00:30:17.847149   54466 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 00:30:17.847156   54466 command_runner.go:130] > # Defaults to false.
	I1205 00:30:17.847168   54466 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 00:30:17.847182   54466 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 00:30:17.847196   54466 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 00:30:17.847206   54466 command_runner.go:130] > # hooks_dir = [
	I1205 00:30:17.847219   54466 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 00:30:17.847227   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847238   54466 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 00:30:17.847330   54466 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 00:30:17.847359   54466 command_runner.go:130] > # its default mounts from the following two files:
	I1205 00:30:17.847365   54466 command_runner.go:130] > #
	I1205 00:30:17.847373   54466 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 00:30:17.847383   54466 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 00:30:17.847392   54466 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 00:30:17.847401   54466 command_runner.go:130] > #
	I1205 00:30:17.847413   54466 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 00:30:17.847427   54466 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 00:30:17.847442   54466 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 00:30:17.847454   54466 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 00:30:17.847462   54466 command_runner.go:130] > #
	I1205 00:30:17.847471   54466 command_runner.go:130] > # default_mounts_file = ""
	I1205 00:30:17.847482   54466 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 00:30:17.847494   54466 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 00:30:17.847504   54466 command_runner.go:130] > pids_limit = 1024
	I1205 00:30:17.847517   54466 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1205 00:30:17.847531   54466 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 00:30:17.847546   54466 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 00:30:17.847564   54466 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 00:30:17.847574   54466 command_runner.go:130] > # log_size_max = -1
	I1205 00:30:17.847587   54466 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 00:30:17.847596   54466 command_runner.go:130] > # log_to_journald = false
	I1205 00:30:17.847609   54466 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 00:30:17.847621   54466 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 00:30:17.847641   54466 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 00:30:17.847654   54466 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 00:30:17.847667   54466 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 00:30:17.847678   54466 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 00:30:17.847688   54466 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 00:30:17.847697   54466 command_runner.go:130] > # read_only = false
	I1205 00:30:17.847709   54466 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 00:30:17.847723   54466 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 00:30:17.847743   54466 command_runner.go:130] > # live configuration reload.
	I1205 00:30:17.847753   54466 command_runner.go:130] > # log_level = "info"
	I1205 00:30:17.847764   54466 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 00:30:17.847776   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.847787   54466 command_runner.go:130] > # log_filter = ""
	I1205 00:30:17.847798   54466 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 00:30:17.847812   54466 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 00:30:17.847822   54466 command_runner.go:130] > # separated by comma.
	I1205 00:30:17.847836   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847847   54466 command_runner.go:130] > # uid_mappings = ""
	I1205 00:30:17.847861   54466 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 00:30:17.847874   54466 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 00:30:17.847885   54466 command_runner.go:130] > # separated by comma.
	I1205 00:30:17.847899   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847908   54466 command_runner.go:130] > # gid_mappings = ""
	I1205 00:30:17.847919   54466 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 00:30:17.847933   54466 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 00:30:17.847947   54466 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 00:30:17.847963   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847970   54466 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 00:30:17.848017   54466 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 00:30:17.848030   54466 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 00:30:17.848043   54466 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 00:30:17.848056   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.848066   54466 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 00:30:17.848077   54466 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 00:30:17.848090   54466 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 00:30:17.848103   54466 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 00:30:17.848119   54466 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 00:30:17.848131   54466 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 00:30:17.848144   54466 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 00:30:17.848153   54466 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 00:30:17.848164   54466 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 00:30:17.848184   54466 command_runner.go:130] > drop_infra_ctr = false
	I1205 00:30:17.848199   54466 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 00:30:17.848218   54466 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 00:30:17.848232   54466 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 00:30:17.848239   54466 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 00:30:17.848254   54466 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 00:30:17.848263   54466 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 00:30:17.848273   54466 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 00:30:17.848284   54466 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 00:30:17.848294   54466 command_runner.go:130] > # shared_cpuset = ""
	I1205 00:30:17.848303   54466 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 00:30:17.848315   54466 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 00:30:17.848322   54466 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 00:30:17.848337   54466 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 00:30:17.848348   54466 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 00:30:17.848358   54466 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 00:30:17.848371   54466 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 00:30:17.848382   54466 command_runner.go:130] > # enable_criu_support = false
	I1205 00:30:17.848390   54466 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 00:30:17.848403   54466 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 00:30:17.848421   54466 command_runner.go:130] > # enable_pod_events = false
	I1205 00:30:17.848435   54466 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 00:30:17.848445   54466 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 00:30:17.848457   54466 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 00:30:17.848466   54466 command_runner.go:130] > # default_runtime = "runc"
	I1205 00:30:17.848475   54466 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 00:30:17.848489   54466 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 00:30:17.848507   54466 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 00:30:17.848518   54466 command_runner.go:130] > # creation as a file is not desired either.
	I1205 00:30:17.848537   54466 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 00:30:17.848554   54466 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 00:30:17.848561   54466 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 00:30:17.848567   54466 command_runner.go:130] > # ]
	I1205 00:30:17.848587   54466 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 00:30:17.848598   54466 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 00:30:17.848611   54466 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 00:30:17.848623   54466 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 00:30:17.848632   54466 command_runner.go:130] > #
	I1205 00:30:17.848643   54466 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 00:30:17.848653   54466 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 00:30:17.848683   54466 command_runner.go:130] > # runtime_type = "oci"
	I1205 00:30:17.848694   54466 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 00:30:17.848705   54466 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 00:30:17.848717   54466 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 00:30:17.848725   54466 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 00:30:17.848734   54466 command_runner.go:130] > # monitor_env = []
	I1205 00:30:17.848745   54466 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 00:30:17.848755   54466 command_runner.go:130] > # allowed_annotations = []
	I1205 00:30:17.848767   54466 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 00:30:17.848773   54466 command_runner.go:130] > # Where:
	I1205 00:30:17.848782   54466 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 00:30:17.848796   54466 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 00:30:17.848810   54466 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 00:30:17.848824   54466 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 00:30:17.848834   54466 command_runner.go:130] > #   in $PATH.
	I1205 00:30:17.848846   54466 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 00:30:17.848857   54466 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 00:30:17.848869   54466 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 00:30:17.848877   54466 command_runner.go:130] > #   state.
	I1205 00:30:17.848884   54466 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 00:30:17.848896   54466 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 00:30:17.848910   54466 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 00:30:17.848923   54466 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 00:30:17.848936   54466 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 00:30:17.848949   54466 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 00:30:17.848959   54466 command_runner.go:130] > #   The currently recognized values are:
	I1205 00:30:17.848982   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 00:30:17.848998   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 00:30:17.849014   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 00:30:17.849024   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 00:30:17.849040   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 00:30:17.849055   54466 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 00:30:17.849070   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 00:30:17.849083   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 00:30:17.849092   54466 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 00:30:17.849104   54466 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 00:30:17.849115   54466 command_runner.go:130] > #   deprecated option "conmon".
	I1205 00:30:17.849130   54466 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 00:30:17.849143   54466 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 00:30:17.849158   54466 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 00:30:17.849169   54466 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 00:30:17.849180   54466 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1205 00:30:17.849190   54466 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 00:30:17.849204   54466 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 00:30:17.849217   54466 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 00:30:17.849226   54466 command_runner.go:130] > #
	I1205 00:30:17.849238   54466 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 00:30:17.849246   54466 command_runner.go:130] > #
	I1205 00:30:17.849256   54466 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 00:30:17.849270   54466 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 00:30:17.849279   54466 command_runner.go:130] > #
	I1205 00:30:17.849289   54466 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 00:30:17.849301   54466 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 00:30:17.849309   54466 command_runner.go:130] > #
	I1205 00:30:17.849323   54466 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 00:30:17.849332   54466 command_runner.go:130] > # feature.
	I1205 00:30:17.849340   54466 command_runner.go:130] > #
	I1205 00:30:17.849351   54466 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 00:30:17.849363   54466 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 00:30:17.849376   54466 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 00:30:17.849388   54466 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 00:30:17.849402   54466 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 00:30:17.849411   54466 command_runner.go:130] > #
	I1205 00:30:17.849424   54466 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 00:30:17.849440   54466 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 00:30:17.849449   54466 command_runner.go:130] > #
	I1205 00:30:17.849460   54466 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 00:30:17.849470   54466 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 00:30:17.849478   54466 command_runner.go:130] > #
	I1205 00:30:17.849490   54466 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 00:30:17.849503   54466 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 00:30:17.849513   54466 command_runner.go:130] > # limitation.
	I1205 00:30:17.849523   54466 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 00:30:17.849533   54466 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 00:30:17.849543   54466 command_runner.go:130] > runtime_type = "oci"
	I1205 00:30:17.849553   54466 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 00:30:17.849561   54466 command_runner.go:130] > runtime_config_path = ""
	I1205 00:30:17.849568   54466 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 00:30:17.849578   54466 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 00:30:17.849588   54466 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 00:30:17.849595   54466 command_runner.go:130] > monitor_env = [
	I1205 00:30:17.849609   54466 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 00:30:17.849618   54466 command_runner.go:130] > ]
	I1205 00:30:17.849629   54466 command_runner.go:130] > privileged_without_host_devices = false
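For reference, the [crio.runtime.runtimes.runc] table above follows the runtime-handler format documented earlier in this dump. A minimal sketch of how an additional handler could be declared through a drop-in file (hypothetical; the "crun" handler name, its paths, and the drop-in file name are assumptions, not part of this run):

	# Hypothetical drop-in adding a second runtime handler, following the
	# [crio.runtime.runtimes.runtime-handler] format documented above.
	sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = ["io.kubernetes.cri-o.Devices"]
	EOF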
	I1205 00:30:17.849642   54466 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 00:30:17.849653   54466 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 00:30:17.849663   54466 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 00:30:17.849679   54466 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 00:30:17.849695   54466 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 00:30:17.849709   54466 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 00:30:17.849726   54466 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 00:30:17.849742   54466 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 00:30:17.849755   54466 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 00:30:17.849765   54466 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 00:30:17.849772   54466 command_runner.go:130] > # Example:
	I1205 00:30:17.849779   54466 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 00:30:17.849788   54466 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 00:30:17.849796   54466 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 00:30:17.849804   54466 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 00:30:17.849809   54466 command_runner.go:130] > # cpuset = 0
	I1205 00:30:17.849814   54466 command_runner.go:130] > # cpushares = "0-1"
	I1205 00:30:17.849820   54466 command_runner.go:130] > # Where:
	I1205 00:30:17.849831   54466 command_runner.go:130] > # The workload name is workload-type.
	I1205 00:30:17.849841   54466 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 00:30:17.849847   54466 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 00:30:17.849853   54466 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 00:30:17.849868   54466 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 00:30:17.849877   54466 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 00:30:17.849885   54466 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 00:30:17.849896   54466 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 00:30:17.849903   54466 command_runner.go:130] > # Default value is set to true
	I1205 00:30:17.849911   54466 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 00:30:17.849920   54466 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 00:30:17.849927   54466 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 00:30:17.849933   54466 command_runner.go:130] > # Default value is set to 'false'
	I1205 00:30:17.849937   54466 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 00:30:17.849965   54466 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 00:30:17.849980   54466 command_runner.go:130] > #
	I1205 00:30:17.849993   54466 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 00:30:17.850006   54466 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 00:30:17.850019   54466 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 00:30:17.850028   54466 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 00:30:17.850042   54466 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 00:30:17.850052   54466 command_runner.go:130] > [crio.image]
	I1205 00:30:17.850062   54466 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 00:30:17.850073   54466 command_runner.go:130] > # default_transport = "docker://"
	I1205 00:30:17.850086   54466 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 00:30:17.850099   54466 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 00:30:17.850109   54466 command_runner.go:130] > # global_auth_file = ""
	I1205 00:30:17.850120   54466 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 00:30:17.850129   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.850136   54466 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 00:30:17.850151   54466 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 00:30:17.850165   54466 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 00:30:17.850177   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.850188   54466 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 00:30:17.850200   54466 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 00:30:17.850212   54466 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 00:30:17.850228   54466 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 00:30:17.850237   54466 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 00:30:17.850247   54466 command_runner.go:130] > # pause_command = "/pause"
	I1205 00:30:17.850261   54466 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 00:30:17.850274   54466 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 00:30:17.850287   54466 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 00:30:17.850300   54466 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 00:30:17.850311   54466 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 00:30:17.850325   54466 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 00:30:17.850334   54466 command_runner.go:130] > # pinned_images = [
	I1205 00:30:17.850341   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850355   54466 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 00:30:17.850365   54466 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 00:30:17.850379   54466 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 00:30:17.850393   54466 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 00:30:17.850405   54466 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 00:30:17.850412   54466 command_runner.go:130] > # signature_policy = ""
	I1205 00:30:17.850424   54466 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 00:30:17.850435   54466 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 00:30:17.850444   54466 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 00:30:17.850456   54466 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1205 00:30:17.850465   54466 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 00:30:17.850472   54466 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 00:30:17.850481   54466 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 00:30:17.850491   54466 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 00:30:17.850498   54466 command_runner.go:130] > # changing them here.
	I1205 00:30:17.850505   54466 command_runner.go:130] > # insecure_registries = [
	I1205 00:30:17.850512   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850525   54466 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 00:30:17.850538   54466 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 00:30:17.850546   54466 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 00:30:17.850557   54466 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 00:30:17.850564   54466 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 00:30:17.850570   54466 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1205 00:30:17.850577   54466 command_runner.go:130] > # CNI plugins.
	I1205 00:30:17.850580   54466 command_runner.go:130] > [crio.network]
	I1205 00:30:17.850589   54466 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 00:30:17.850597   54466 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 00:30:17.850604   54466 command_runner.go:130] > # cni_default_network = ""
	I1205 00:30:17.850611   54466 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 00:30:17.850618   54466 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 00:30:17.850626   54466 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 00:30:17.850632   54466 command_runner.go:130] > # plugin_dirs = [
	I1205 00:30:17.850636   54466 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 00:30:17.850642   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850648   54466 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 00:30:17.850654   54466 command_runner.go:130] > [crio.metrics]
	I1205 00:30:17.850660   54466 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 00:30:17.850666   54466 command_runner.go:130] > enable_metrics = true
	I1205 00:30:17.850670   54466 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 00:30:17.850677   54466 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 00:30:17.850683   54466 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 00:30:17.850691   54466 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 00:30:17.850701   54466 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 00:30:17.850708   54466 command_runner.go:130] > # metrics_collectors = [
	I1205 00:30:17.850712   54466 command_runner.go:130] > # 	"operations",
	I1205 00:30:17.850720   54466 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 00:30:17.850724   54466 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 00:30:17.850731   54466 command_runner.go:130] > # 	"operations_errors",
	I1205 00:30:17.850735   54466 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 00:30:17.850742   54466 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 00:30:17.850746   54466 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 00:30:17.850752   54466 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 00:30:17.850756   54466 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 00:30:17.850763   54466 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 00:30:17.850767   54466 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 00:30:17.850773   54466 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 00:30:17.850778   54466 command_runner.go:130] > # 	"containers_oom_total",
	I1205 00:30:17.850784   54466 command_runner.go:130] > # 	"containers_oom",
	I1205 00:30:17.850788   54466 command_runner.go:130] > # 	"processes_defunct",
	I1205 00:30:17.850794   54466 command_runner.go:130] > # 	"operations_total",
	I1205 00:30:17.850799   54466 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 00:30:17.850805   54466 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 00:30:17.850809   54466 command_runner.go:130] > # 	"operations_errors_total",
	I1205 00:30:17.850816   54466 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 00:30:17.850820   54466 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 00:30:17.850827   54466 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 00:30:17.850831   54466 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 00:30:17.850835   54466 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 00:30:17.850841   54466 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 00:30:17.850848   54466 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 00:30:17.850865   54466 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 00:30:17.850877   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850885   54466 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 00:30:17.850890   54466 command_runner.go:130] > # metrics_port = 9090
	I1205 00:30:17.850897   54466 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 00:30:17.850903   54466 command_runner.go:130] > # metrics_socket = ""
	I1205 00:30:17.850911   54466 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 00:30:17.850917   54466 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 00:30:17.850925   54466 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 00:30:17.850930   54466 command_runner.go:130] > # certificate on any modification event.
	I1205 00:30:17.850936   54466 command_runner.go:130] > # metrics_cert = ""
	I1205 00:30:17.850941   54466 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 00:30:17.850948   54466 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 00:30:17.850952   54466 command_runner.go:130] > # metrics_key = ""
	I1205 00:30:17.850961   54466 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 00:30:17.850965   54466 command_runner.go:130] > [crio.tracing]
	I1205 00:30:17.850976   54466 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 00:30:17.850982   54466 command_runner.go:130] > # enable_tracing = false
	I1205 00:30:17.851003   54466 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 00:30:17.851011   54466 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 00:30:17.851018   54466 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 00:30:17.851022   54466 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 00:30:17.851029   54466 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 00:30:17.851033   54466 command_runner.go:130] > [crio.nri]
	I1205 00:30:17.851040   54466 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 00:30:17.851044   54466 command_runner.go:130] > # enable_nri = false
	I1205 00:30:17.851049   54466 command_runner.go:130] > # NRI socket to listen on.
	I1205 00:30:17.851053   54466 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 00:30:17.851057   54466 command_runner.go:130] > # NRI plugin directory to use.
	I1205 00:30:17.851062   54466 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 00:30:17.851066   54466 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 00:30:17.851070   54466 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 00:30:17.851075   54466 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 00:30:17.851079   54466 command_runner.go:130] > # nri_disable_connections = false
	I1205 00:30:17.851084   54466 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 00:30:17.851091   54466 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 00:30:17.851097   54466 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 00:30:17.851103   54466 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 00:30:17.851111   54466 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 00:30:17.851115   54466 command_runner.go:130] > [crio.stats]
	I1205 00:30:17.851122   54466 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 00:30:17.851128   54466 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 00:30:17.851134   54466 command_runner.go:130] > # stats_collection_period = 0
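The dump above is the complete crio.conf that CRI-O reported during provisioning. Purely as an illustrative sketch (none of these commands are executed by the test), the effective configuration and live-reloadable options can be inspected on the node like this:

	# Print the configuration CRI-O is actually running with and spot-check a few keys:
	sudo crio config | grep -E '^(pids_limit|pause_image|enable_metrics)'
	# Ask the runtime for its status/config through the CRI socket:
	sudo crictl info | head -n 40
	# Options marked "supports live configuration reload" are re-read on reload:
	sudo systemctl reload crio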
	I1205 00:30:17.851166   54466 command_runner.go:130] ! time="2024-12-05 00:30:17.814450691Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 00:30:17.851183   54466 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 00:30:17.851298   54466 cni.go:84] Creating CNI manager for ""
	I1205 00:30:17.851309   54466 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 00:30:17.851316   54466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:30:17.851335   54466 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-591074 NodeName:multinode-591074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:30:17.851461   54466 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-591074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
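The kubeadm configuration rendered above is what minikube ships to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp step below). As a hedged sketch, one way to sanity-check such a file by hand is kubeadm's own config tooling; the test run itself does not do this:

	# Validate the rendered file against the kubeadm API types it declares:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Print a fully-defaulted InitConfiguration/ClusterConfiguration for comparison:
	kubeadm config print init-defaults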
	I1205 00:30:17.851522   54466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:30:17.861550   54466 command_runner.go:130] > kubeadm
	I1205 00:30:17.861572   54466 command_runner.go:130] > kubectl
	I1205 00:30:17.861576   54466 command_runner.go:130] > kubelet
	I1205 00:30:17.861661   54466 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:30:17.861734   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:30:17.871681   54466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 00:30:17.888302   54466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:30:17.904527   54466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1205 00:30:17.920755   54466 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I1205 00:30:17.924748   54466 command_runner.go:130] > 192.168.39.176	control-plane.minikube.internal
	I1205 00:30:17.924819   54466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:30:18.062202   54466 ssh_runner.go:195] Run: sudo systemctl start kubelet
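At this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm config have been written, systemd has been reloaded, and the kubelet has been started. An illustrative way to confirm what systemd actually loaded (not part of the log) is:

	# Show the kubelet unit together with any drop-ins systemd merged in:
	systemctl cat kubelet
	# Confirm the service is running and peek at its most recent log lines:
	systemctl is-active kubelet && journalctl -u kubelet --no-pager -n 20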
	I1205 00:30:18.077511   54466 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074 for IP: 192.168.39.176
	I1205 00:30:18.077539   54466 certs.go:194] generating shared ca certs ...
	I1205 00:30:18.077559   54466 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:30:18.077724   54466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:30:18.077775   54466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:30:18.077789   54466 certs.go:256] generating profile certs ...
	I1205 00:30:18.077891   54466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/client.key
	I1205 00:30:18.077988   54466 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key.738cda39
	I1205 00:30:18.078046   54466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key
	I1205 00:30:18.078059   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 00:30:18.078071   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 00:30:18.078083   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 00:30:18.078093   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 00:30:18.078104   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 00:30:18.078114   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 00:30:18.078126   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 00:30:18.078138   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 00:30:18.078184   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:30:18.078212   54466 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:30:18.078222   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:30:18.078261   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:30:18.078286   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:30:18.078310   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:30:18.078346   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:30:18.078372   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.078386   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.078401   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.079033   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:30:18.101530   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:30:18.124472   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:30:18.146972   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:30:18.170126   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 00:30:18.192726   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:30:18.214601   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:30:18.236515   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 00:30:18.258658   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:30:18.281138   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:30:18.302796   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:30:18.324650   54466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
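All CA and profile certificates are now copied into /var/lib/minikube/certs (and the trusted CAs into /usr/share/ca-certificates) on the node. If a copy ever needs to be checked against its source, comparing fingerprints is a simple sketch (the first path is the local file named in the transfers above; the second command would run inside the VM):

	openssl x509 -noout -fingerprint -sha256 -in /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt
	sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/ca.crt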
	I1205 00:30:18.340314   54466 ssh_runner.go:195] Run: openssl version
	I1205 00:30:18.346073   54466 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 00:30:18.346172   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:30:18.356478   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360515   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360622   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360680   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.366047   54466 command_runner.go:130] > 51391683
	I1205 00:30:18.366120   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:30:18.375638   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:30:18.385794   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389747   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389906   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389957   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.395113   54466 command_runner.go:130] > 3ec20f2e
	I1205 00:30:18.395184   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:30:18.404106   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:30:18.414224   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418224   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418447   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418494   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.423607   54466 command_runner.go:130] > b5213941
	I1205 00:30:18.423805   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
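The three openssl/ln sequences above implement OpenSSL's hashed-symlink convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0 (51391683, 3ec20f2e, and b5213941 here) so the library can locate it by hash. A generic version of the same steps, with "mycert.pem" as a placeholder name:

	# Link a CA certificate under its subject-hash name so OpenSSL can discover it:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/mycert.pem)
	sudo ln -fs /usr/share/ca-certificates/mycert.pem "/etc/ssl/certs/${h}.0"
	# Confirm the certificate now verifies against the hashed CA directory:
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/mycert.pem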
	I1205 00:30:18.433456   54466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:30:18.437701   54466 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:30:18.437727   54466 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 00:30:18.437733   54466 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1205 00:30:18.437739   54466 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 00:30:18.437745   54466 command_runner.go:130] > Access: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437750   54466 command_runner.go:130] > Modify: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437757   54466 command_runner.go:130] > Change: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437764   54466 command_runner.go:130] >  Birth: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437820   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:30:18.443176   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.443253   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:30:18.448416   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.448573   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:30:18.453672   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.453876   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:30:18.459259   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.459344   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:30:18.464585   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.464650   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 00:30:18.469875   54466 command_runner.go:130] > Certificate will not expire
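Each "-checkend 86400" call asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means the check passed. The same check can be applied to every control-plane certificate at once, as an illustrative loop not executed by the test:

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "EXPIRING SOON: $c"
	done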
	I1205 00:30:18.469963   54466 kubeadm.go:392] StartCluster: {Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:30:18.470086   54466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:30:18.470138   54466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:30:18.541088   54466 command_runner.go:130] > 11bddc006567c030c5056b59664a40db900aebbb6d98e5782c4b942e18c1624d
	I1205 00:30:18.541121   54466 command_runner.go:130] > f38c0e934e2c97ceefdf38fc780932bd818678a2301a31195a27b0d132b7ff46
	I1205 00:30:18.541129   54466 command_runner.go:130] > dfec96603c687760fb699c4bc5ff8a14e85642a6128d6caae3f470cb125d4116
	I1205 00:30:18.541138   54466 command_runner.go:130] > a1906ff20d3426137e839ca45d6f42b4dcb7e4bd1f39080d967333ceccc3302d
	I1205 00:30:18.541147   54466 command_runner.go:130] > 384cbec2fc8e9d2a44bc883049f95d0c93d52bce3c682d0af92c07dc8fb19f58
	I1205 00:30:18.541155   54466 command_runner.go:130] > 1a11d940106d859696de5b98854555726c5002a9b30b82f879cc6b6ba76bf879
	I1205 00:30:18.541163   54466 command_runner.go:130] > cc88ace22ccdb015b766b0ff94f86b4f1454d86644677834fbafe424b4bf3bb6
	I1205 00:30:18.541177   54466 command_runner.go:130] > 9126a5b156206e62a1f6ba394b3da766590d9772a20d32046b01ad201273b7bc
	I1205 00:30:18.541206   54466 cri.go:89] found id: "11bddc006567c030c5056b59664a40db900aebbb6d98e5782c4b942e18c1624d"
	I1205 00:30:18.541219   54466 cri.go:89] found id: "f38c0e934e2c97ceefdf38fc780932bd818678a2301a31195a27b0d132b7ff46"
	I1205 00:30:18.541224   54466 cri.go:89] found id: "dfec96603c687760fb699c4bc5ff8a14e85642a6128d6caae3f470cb125d4116"
	I1205 00:30:18.541229   54466 cri.go:89] found id: "a1906ff20d3426137e839ca45d6f42b4dcb7e4bd1f39080d967333ceccc3302d"
	I1205 00:30:18.541233   54466 cri.go:89] found id: "384cbec2fc8e9d2a44bc883049f95d0c93d52bce3c682d0af92c07dc8fb19f58"
	I1205 00:30:18.541240   54466 cri.go:89] found id: "1a11d940106d859696de5b98854555726c5002a9b30b82f879cc6b6ba76bf879"
	I1205 00:30:18.541244   54466 cri.go:89] found id: "cc88ace22ccdb015b766b0ff94f86b4f1454d86644677834fbafe424b4bf3bb6"
	I1205 00:30:18.541248   54466 cri.go:89] found id: "9126a5b156206e62a1f6ba394b3da766590d9772a20d32046b01ad201273b7bc"
	I1205 00:30:18.541255   54466 cri.go:89] found id: ""
	I1205 00:30:18.541308   54466 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 00:30:18.593717   54466 command_runner.go:130] ! load container cb51b07c586cd0d31246296e4528a6fe36a997b52d690de97104caeffcdca8aa: container does not exist

                                                
                                                
-- /stdout --
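
Note on the certificate checks in the log above: before reusing the existing control-plane certificates, minikube asks openssl whether each one is still valid for at least another day ("openssl x509 -noout -checkend 86400") and only regenerates them if that check fails. Below is a minimal Go sketch of the same freshness test using crypto/x509 instead of shelling out to openssl; the file path and 24-hour threshold are illustrative assumptions taken from the log, not the code minikube itself runs.

    // certcheck.go: rough equivalent of "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Path is an assumption copied from the log above.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
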
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-591074 -n multinode-591074
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-591074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.49s)
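
Note on the container listing in the log above: before deciding how to restart the cluster, minikube enumerates the kube-system containers through the CRI socket ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system") and then cross-checks "runc list -f json". The sketch below reproduces only the first step, locally, with os/exec; minikube runs it over SSH inside the VM, and crictl being on PATH with sudo access is an assumption here.

    // crilist.go: local sketch of the kube-system container enumeration above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs shells out to crictl and returns one container ID per line.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps: %w", err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }
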

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 stop
E1205 00:33:10.087277   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-591074 stop: exit status 82 (2m0.46393348s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-591074-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-591074 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 status: (18.767625652s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr: (3.360475723s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr": 
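
Note on the stop failure above: "minikube stop" gave up after two minutes with GUEST_STOP_TIMEOUT (exit status 82) while node m02 was still reported as "Running", which is why the subsequent status checks count no stopped hosts or kubelets. The sketch below shows only the general stop-and-poll-with-deadline pattern such an operation follows; vmState is a hypothetical stand-in for a real driver query, not minikube's kvm2 code.

    // stopwait.go: illustrative stop-with-deadline loop (pattern only).
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // vmState is a placeholder; a real implementation would ask the kvm2/libvirt
    // driver for the domain state.
    func vmState(name string) string {
        return "Running"
    }

    // stopAndWait polls the VM state until it is "Stopped" or the deadline passes.
    func stopAndWait(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if vmState(name) == "Stopped" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return errors.New("GUEST_STOP_TIMEOUT: VM still running after " + timeout.String())
    }

    func main() {
        // Two minutes mirrors the timeout seen in the failed stop above.
        if err := stopAndWait("multinode-591074-m02", 2*time.Minute); err != nil {
            fmt.Println("X Exiting:", err)
        }
    }
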
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-591074 -n multinode-591074
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 logs -n 25: (2.006857229s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074:/home/docker/cp-test_multinode-591074-m02_multinode-591074.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074 sudo cat                                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m02_multinode-591074.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03:/home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074-m03 sudo cat                                   | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp testdata/cp-test.txt                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074:/home/docker/cp-test_multinode-591074-m03_multinode-591074.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074 sudo cat                                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02:/home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074-m02 sudo cat                                   | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-591074 node stop m03                                                          | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	| node    | multinode-591074 node start                                                             | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:26 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| stop    | -p multinode-591074                                                                     | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| start   | -p multinode-591074                                                                     | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:28 UTC | 05 Dec 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC |                     |
	| node    | multinode-591074 node delete                                                            | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC | 05 Dec 24 00:32 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-591074 stop                                                                   | multinode-591074 | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:28:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:28:40.047829   54466 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:28:40.047942   54466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:28:40.047951   54466 out.go:358] Setting ErrFile to fd 2...
	I1205 00:28:40.047955   54466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:28:40.048133   54466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:28:40.048677   54466 out.go:352] Setting JSON to false
	I1205 00:28:40.049588   54466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7866,"bootTime":1733350654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:28:40.049680   54466 start.go:139] virtualization: kvm guest
	I1205 00:28:40.051889   54466 out.go:177] * [multinode-591074] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:28:40.053269   54466 notify.go:220] Checking for updates...
	I1205 00:28:40.053281   54466 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:28:40.054709   54466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:28:40.056153   54466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:28:40.057461   54466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:28:40.058831   54466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:28:40.060505   54466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:28:40.062303   54466 config.go:182] Loaded profile config "multinode-591074": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:28:40.062388   54466 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:28:40.062821   54466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:28:40.062859   54466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:28:40.078139   54466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I1205 00:28:40.078659   54466 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:28:40.079144   54466 main.go:141] libmachine: Using API Version  1
	I1205 00:28:40.079165   54466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:28:40.079484   54466 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:28:40.079651   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.115623   54466 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:28:40.116695   54466 start.go:297] selected driver: kvm2
	I1205 00:28:40.116706   54466 start.go:901] validating driver "kvm2" against &{Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:28:40.116871   54466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:28:40.117204   54466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:28:40.117276   54466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:28:40.132090   54466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:28:40.132762   54466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:28:40.132794   54466 cni.go:84] Creating CNI manager for ""
	I1205 00:28:40.132843   54466 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 00:28:40.132899   54466 start.go:340] cluster config:
	{Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:28:40.133042   54466 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:28:40.134569   54466 out.go:177] * Starting "multinode-591074" primary control-plane node in "multinode-591074" cluster
	I1205 00:28:40.135767   54466 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:28:40.135802   54466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 00:28:40.135817   54466 cache.go:56] Caching tarball of preloaded images
	I1205 00:28:40.135876   54466 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:28:40.135888   54466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 00:28:40.136015   54466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/config.json ...
	I1205 00:28:40.136195   54466 start.go:360] acquireMachinesLock for multinode-591074: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:28:40.136238   54466 start.go:364] duration metric: took 25.703µs to acquireMachinesLock for "multinode-591074"
	I1205 00:28:40.136256   54466 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:28:40.136265   54466 fix.go:54] fixHost starting: 
	I1205 00:28:40.136526   54466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:28:40.136563   54466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:28:40.150701   54466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1205 00:28:40.151089   54466 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:28:40.151532   54466 main.go:141] libmachine: Using API Version  1
	I1205 00:28:40.151554   54466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:28:40.151829   54466 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:28:40.152063   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.152217   54466 main.go:141] libmachine: (multinode-591074) Calling .GetState
	I1205 00:28:40.153911   54466 fix.go:112] recreateIfNeeded on multinode-591074: state=Running err=<nil>
	W1205 00:28:40.153963   54466 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:28:40.155692   54466 out.go:177] * Updating the running kvm2 "multinode-591074" VM ...
	I1205 00:28:40.156729   54466 machine.go:93] provisionDockerMachine start ...
	I1205 00:28:40.156747   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:28:40.156938   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.159424   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.159882   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.159913   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.160082   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.160266   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.160399   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.160527   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.160710   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.160927   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.160939   54466 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:28:40.262765   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-591074
	
	I1205 00:28:40.262790   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.263054   54466 buildroot.go:166] provisioning hostname "multinode-591074"
	I1205 00:28:40.263082   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.263266   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.266286   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.266644   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.266699   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.266845   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.266997   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.267120   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.267231   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.267432   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.267601   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.267612   54466 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-591074 && echo "multinode-591074" | sudo tee /etc/hostname
	I1205 00:28:40.381638   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-591074
	
	I1205 00:28:40.381672   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.384687   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.385127   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.385167   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.385330   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.385531   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.385718   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.385867   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.386056   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.386214   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.386231   54466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-591074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-591074/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-591074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:28:40.486759   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:28:40.486787   54466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:28:40.486820   54466 buildroot.go:174] setting up certificates
	I1205 00:28:40.486831   54466 provision.go:84] configureAuth start
	I1205 00:28:40.486845   54466 main.go:141] libmachine: (multinode-591074) Calling .GetMachineName
	I1205 00:28:40.487120   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:28:40.489787   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.490303   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.490332   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.490516   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.492706   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.493005   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.493046   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.493147   54466 provision.go:143] copyHostCerts
	I1205 00:28:40.493169   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:28:40.493208   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:28:40.493220   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:28:40.493314   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:28:40.493420   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:28:40.493439   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:28:40.493446   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:28:40.493478   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:28:40.493534   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:28:40.493551   54466 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:28:40.493557   54466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:28:40.493578   54466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:28:40.493636   54466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.multinode-591074 san=[127.0.0.1 192.168.39.176 localhost minikube multinode-591074]
	I1205 00:28:40.681260   54466 provision.go:177] copyRemoteCerts
	I1205 00:28:40.681326   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:28:40.681348   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.684000   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.684376   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.684418   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.684572   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.684756   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.684904   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.685051   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:28:40.764035   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 00:28:40.764110   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:28:40.793860   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 00:28:40.793921   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 00:28:40.816708   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 00:28:40.816794   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 00:28:40.839102   54466 provision.go:87] duration metric: took 352.256196ms to configureAuth
	I1205 00:28:40.839131   54466 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:28:40.839391   54466 config.go:182] Loaded profile config "multinode-591074": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:28:40.839461   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:28:40.842269   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.842648   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:28:40.842673   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:28:40.842833   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:28:40.843023   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.843186   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:28:40.843344   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:28:40.843522   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:28:40.843698   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:28:40.843718   54466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:30:11.601992   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:30:11.602021   54466 machine.go:96] duration metric: took 1m31.4452786s to provisionDockerMachine
	I1205 00:30:11.602032   54466 start.go:293] postStartSetup for "multinode-591074" (driver="kvm2")
	I1205 00:30:11.602043   54466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:30:11.602058   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.602356   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:30:11.602378   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.605202   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.605702   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.605726   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.605840   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.606020   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.606175   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.606345   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.684627   54466 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:30:11.688621   54466 command_runner.go:130] > NAME=Buildroot
	I1205 00:30:11.688644   54466 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 00:30:11.688651   54466 command_runner.go:130] > ID=buildroot
	I1205 00:30:11.688658   54466 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 00:30:11.688664   54466 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 00:30:11.688707   54466 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:30:11.688724   54466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:30:11.688782   54466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:30:11.688876   54466 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:30:11.688886   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /etc/ssl/certs/213852.pem
	I1205 00:30:11.689004   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:30:11.697889   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:30:11.720349   54466 start.go:296] duration metric: took 118.303665ms for postStartSetup
	I1205 00:30:11.720402   54466 fix.go:56] duration metric: took 1m31.58412026s for fixHost
	I1205 00:30:11.720426   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.723034   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.723545   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.723585   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.723761   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.723967   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.724100   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.724245   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.724423   54466 main.go:141] libmachine: Using SSH client type: native
	I1205 00:30:11.724637   54466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1205 00:30:11.724652   54466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:30:11.826710   54466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733358611.808506478
	
	I1205 00:30:11.826733   54466 fix.go:216] guest clock: 1733358611.808506478
	I1205 00:30:11.826740   54466 fix.go:229] Guest: 2024-12-05 00:30:11.808506478 +0000 UTC Remote: 2024-12-05 00:30:11.720407604 +0000 UTC m=+91.710154212 (delta=88.098874ms)
	I1205 00:30:11.826771   54466 fix.go:200] guest clock delta is within tolerance: 88.098874ms
	I1205 00:30:11.826776   54466 start.go:83] releasing machines lock for "multinode-591074", held for 1m31.690526862s
	I1205 00:30:11.826793   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.827027   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:30:11.829627   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.829982   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.830008   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.830138   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830628   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830771   54466 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:30:11.830882   54466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:30:11.830919   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.830990   54466 ssh_runner.go:195] Run: cat /version.json
	I1205 00:30:11.831007   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:30:11.833438   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833734   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833860   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.833888   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.833999   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.834180   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.834184   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:11.834212   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:11.834341   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:30:11.834360   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.834492   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:30:11.834540   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.834610   54466 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:30:11.834716   54466 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:30:11.914279   54466 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 00:30:11.915087   54466 ssh_runner.go:195] Run: systemctl --version
	I1205 00:30:11.951559   54466 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 00:30:11.951617   54466 command_runner.go:130] > systemd 252 (252)
	I1205 00:30:11.951648   54466 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 00:30:11.951708   54466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:30:12.116125   54466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 00:30:12.126607   54466 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 00:30:12.126956   54466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:30:12.127051   54466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:30:12.136496   54466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 00:30:12.136517   54466 start.go:495] detecting cgroup driver to use...
	I1205 00:30:12.136579   54466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:30:12.152523   54466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:30:12.165813   54466 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:30:12.165873   54466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:30:12.178540   54466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:30:12.191038   54466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:30:12.326354   54466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:30:12.465084   54466 docker.go:233] disabling docker service ...
	I1205 00:30:12.465164   54466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:30:12.497659   54466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:30:12.511275   54466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:30:12.659401   54466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:30:12.795942   54466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
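The systemctl sequence above stops, disables, and masks cri-docker and docker so that only CRI-O serves the CRI socket. A rough sketch of issuing the same sequence from Go (run locally via os/exec for illustration; minikube executes these through its SSH runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same sequence as the log: stop, disable and mask the Docker-based
    	// runtimes. Errors are only reported, since some units may not exist.
    	cmds := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, c := range cmds {
    		if out, err := exec.Command("sudo", c...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", c, err, out)
    		}
    	}
    }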
	I1205 00:30:12.809840   54466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:30:12.828727   54466 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 00:30:12.828779   54466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 00:30:12.828831   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.839340   54466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:30:12.839414   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.849222   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.858948   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.868760   54466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:30:12.878540   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.888153   54466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.898510   54466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:30:12.908228   54466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:30:12.916992   54466 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 00:30:12.917073   54466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:30:12.925494   54466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:30:13.055416   54466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 00:30:17.598366   54466 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.542915423s)
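The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup, after which CRI-O is restarted. A hedged sketch of the same edits (the sed expressions are copied from the log; the Go runner around them is illustrative):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	edits := []string{
    		// Pin the pause image and use cgroupfs, as in the log above.
    		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
    		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
    		`/conmon_cgroup = .*/d`,
    		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
    	}
    	for _, e := range edits {
    		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
    			log.Fatalf("sed %q failed: %v (%s)", e, err, out)
    		}
    	}
    	// Reload units and restart CRI-O so the new settings take effect.
    	for _, c := range [][]string{{"systemctl", "daemon-reload"}, {"systemctl", "restart", "crio"}} {
    		if err := exec.Command("sudo", c...).Run(); err != nil {
    			log.Fatalf("%v: %v", c, err)
    		}
    	}
    }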
	I1205 00:30:17.598398   54466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:30:17.598441   54466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:30:17.603253   54466 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 00:30:17.603283   54466 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 00:30:17.603295   54466 command_runner.go:130] > Device: 0,22	Inode: 1289        Links: 1
	I1205 00:30:17.603306   54466 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 00:30:17.603320   54466 command_runner.go:130] > Access: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603329   54466 command_runner.go:130] > Modify: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603334   54466 command_runner.go:130] > Change: 2024-12-05 00:30:17.477989472 +0000
	I1205 00:30:17.603338   54466 command_runner.go:130] >  Birth: -
	I1205 00:30:17.603361   54466 start.go:563] Will wait 60s for crictl version
	I1205 00:30:17.603444   54466 ssh_runner.go:195] Run: which crictl
	I1205 00:30:17.607117   54466 command_runner.go:130] > /usr/bin/crictl
	I1205 00:30:17.607218   54466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:30:17.641935   54466 command_runner.go:130] > Version:  0.1.0
	I1205 00:30:17.642026   54466 command_runner.go:130] > RuntimeName:  cri-o
	I1205 00:30:17.642034   54466 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 00:30:17.642042   54466 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 00:30:17.643306   54466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
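After restarting CRI-O, start.go waits up to 60s for the socket and for crictl to respond. A minimal polling sketch of that wait (the retry loop and helper are illustrative, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForCrictl polls `crictl version` until it succeeds or the deadline passes.
    func waitForCrictl(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
    		if err == nil {
    			return string(out), nil
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("crictl not ready after %v: %v", timeout, err)
    		}
    		time.Sleep(2 * time.Second)
    	}
    }

    func main() {
    	out, err := waitForCrictl(60 * time.Second)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(out) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
    }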
	I1205 00:30:17.643379   54466 ssh_runner.go:195] Run: crio --version
	I1205 00:30:17.671316   54466 command_runner.go:130] > crio version 1.29.1
	I1205 00:30:17.671347   54466 command_runner.go:130] > Version:        1.29.1
	I1205 00:30:17.671354   54466 command_runner.go:130] > GitCommit:      unknown
	I1205 00:30:17.671358   54466 command_runner.go:130] > GitCommitDate:  unknown
	I1205 00:30:17.671362   54466 command_runner.go:130] > GitTreeState:   clean
	I1205 00:30:17.671368   54466 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 00:30:17.671372   54466 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 00:30:17.671376   54466 command_runner.go:130] > Compiler:       gc
	I1205 00:30:17.671380   54466 command_runner.go:130] > Platform:       linux/amd64
	I1205 00:30:17.671384   54466 command_runner.go:130] > Linkmode:       dynamic
	I1205 00:30:17.671389   54466 command_runner.go:130] > BuildTags:      
	I1205 00:30:17.671393   54466 command_runner.go:130] >   containers_image_ostree_stub
	I1205 00:30:17.671397   54466 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 00:30:17.671401   54466 command_runner.go:130] >   btrfs_noversion
	I1205 00:30:17.671406   54466 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 00:30:17.671411   54466 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 00:30:17.671415   54466 command_runner.go:130] >   seccomp
	I1205 00:30:17.671419   54466 command_runner.go:130] > LDFlags:          unknown
	I1205 00:30:17.671424   54466 command_runner.go:130] > SeccompEnabled:   true
	I1205 00:30:17.671428   54466 command_runner.go:130] > AppArmorEnabled:  false
	I1205 00:30:17.671523   54466 ssh_runner.go:195] Run: crio --version
	I1205 00:30:17.698287   54466 command_runner.go:130] > crio version 1.29.1
	I1205 00:30:17.698311   54466 command_runner.go:130] > Version:        1.29.1
	I1205 00:30:17.698318   54466 command_runner.go:130] > GitCommit:      unknown
	I1205 00:30:17.698323   54466 command_runner.go:130] > GitCommitDate:  unknown
	I1205 00:30:17.698326   54466 command_runner.go:130] > GitTreeState:   clean
	I1205 00:30:17.698333   54466 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 00:30:17.698337   54466 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 00:30:17.698340   54466 command_runner.go:130] > Compiler:       gc
	I1205 00:30:17.698345   54466 command_runner.go:130] > Platform:       linux/amd64
	I1205 00:30:17.698349   54466 command_runner.go:130] > Linkmode:       dynamic
	I1205 00:30:17.698353   54466 command_runner.go:130] > BuildTags:      
	I1205 00:30:17.698357   54466 command_runner.go:130] >   containers_image_ostree_stub
	I1205 00:30:17.698361   54466 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 00:30:17.698365   54466 command_runner.go:130] >   btrfs_noversion
	I1205 00:30:17.698369   54466 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 00:30:17.698373   54466 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 00:30:17.698376   54466 command_runner.go:130] >   seccomp
	I1205 00:30:17.698380   54466 command_runner.go:130] > LDFlags:          unknown
	I1205 00:30:17.698384   54466 command_runner.go:130] > SeccompEnabled:   true
	I1205 00:30:17.698388   54466 command_runner.go:130] > AppArmorEnabled:  false
	I1205 00:30:17.701640   54466 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 00:30:17.703249   54466 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:30:17.706043   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:17.706401   54466 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:30:17.706433   54466 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:30:17.706693   54466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:30:17.710810   54466 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 00:30:17.710889   54466 kubeadm.go:883] updating cluster {Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:30:17.711003   54466 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:30:17.711047   54466 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:30:17.760061   54466 command_runner.go:130] > {
	I1205 00:30:17.760089   54466 command_runner.go:130] >   "images": [
	I1205 00:30:17.760094   54466 command_runner.go:130] >     {
	I1205 00:30:17.760101   54466 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 00:30:17.760106   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760112   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 00:30:17.760116   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760119   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760132   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 00:30:17.760144   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 00:30:17.760151   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760158   54466 command_runner.go:130] >       "size": "94965812",
	I1205 00:30:17.760174   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760188   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760195   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760199   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760203   54466 command_runner.go:130] >     },
	I1205 00:30:17.760207   54466 command_runner.go:130] >     {
	I1205 00:30:17.760212   54466 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 00:30:17.760218   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760226   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 00:30:17.760233   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760240   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760252   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 00:30:17.760267   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 00:30:17.760273   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760281   54466 command_runner.go:130] >       "size": "94958644",
	I1205 00:30:17.760287   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760298   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760304   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760310   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760317   54466 command_runner.go:130] >     },
	I1205 00:30:17.760323   54466 command_runner.go:130] >     {
	I1205 00:30:17.760336   54466 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 00:30:17.760344   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760352   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 00:30:17.760359   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760365   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760376   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 00:30:17.760388   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 00:30:17.760392   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760401   54466 command_runner.go:130] >       "size": "1363676",
	I1205 00:30:17.760411   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760418   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760427   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760437   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760446   54466 command_runner.go:130] >     },
	I1205 00:30:17.760454   54466 command_runner.go:130] >     {
	I1205 00:30:17.760463   54466 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 00:30:17.760473   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760478   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 00:30:17.760483   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760494   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760507   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 00:30:17.760525   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 00:30:17.760535   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760541   54466 command_runner.go:130] >       "size": "31470524",
	I1205 00:30:17.760550   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760559   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760563   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760568   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760577   54466 command_runner.go:130] >     },
	I1205 00:30:17.760584   54466 command_runner.go:130] >     {
	I1205 00:30:17.760597   54466 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 00:30:17.760608   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760616   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 00:30:17.760625   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760632   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760644   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 00:30:17.760652   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 00:30:17.760657   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760664   54466 command_runner.go:130] >       "size": "63273227",
	I1205 00:30:17.760671   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.760678   54466 command_runner.go:130] >       "username": "nonroot",
	I1205 00:30:17.760701   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760709   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760715   54466 command_runner.go:130] >     },
	I1205 00:30:17.760722   54466 command_runner.go:130] >     {
	I1205 00:30:17.760732   54466 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 00:30:17.760738   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760754   54466 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 00:30:17.760762   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760769   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760783   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 00:30:17.760798   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 00:30:17.760812   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760821   54466 command_runner.go:130] >       "size": "149009664",
	I1205 00:30:17.760826   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.760835   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.760842   54466 command_runner.go:130] >       },
	I1205 00:30:17.760852   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.760864   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.760874   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.760880   54466 command_runner.go:130] >     },
	I1205 00:30:17.760886   54466 command_runner.go:130] >     {
	I1205 00:30:17.760910   54466 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 00:30:17.760922   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.760930   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 00:30:17.760940   54466 command_runner.go:130] >       ],
	I1205 00:30:17.760946   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.760965   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 00:30:17.760980   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 00:30:17.760993   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761000   54466 command_runner.go:130] >       "size": "95274464",
	I1205 00:30:17.761005   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761012   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761018   54466 command_runner.go:130] >       },
	I1205 00:30:17.761028   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761038   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761045   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761053   54466 command_runner.go:130] >     },
	I1205 00:30:17.761058   54466 command_runner.go:130] >     {
	I1205 00:30:17.761072   54466 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 00:30:17.761078   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761087   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 00:30:17.761092   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761103   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761126   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 00:30:17.761142   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 00:30:17.761148   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761155   54466 command_runner.go:130] >       "size": "89474374",
	I1205 00:30:17.761161   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761166   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761170   54466 command_runner.go:130] >       },
	I1205 00:30:17.761174   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761180   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761188   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761193   54466 command_runner.go:130] >     },
	I1205 00:30:17.761199   54466 command_runner.go:130] >     {
	I1205 00:30:17.761208   54466 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 00:30:17.761214   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761222   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 00:30:17.761227   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761237   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761247   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 00:30:17.761256   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 00:30:17.761259   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761265   54466 command_runner.go:130] >       "size": "92783513",
	I1205 00:30:17.761271   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.761277   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761284   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761290   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761295   54466 command_runner.go:130] >     },
	I1205 00:30:17.761301   54466 command_runner.go:130] >     {
	I1205 00:30:17.761310   54466 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 00:30:17.761318   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761326   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 00:30:17.761335   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761341   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761354   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 00:30:17.761369   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 00:30:17.761378   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761385   54466 command_runner.go:130] >       "size": "68457798",
	I1205 00:30:17.761394   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761401   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.761410   54466 command_runner.go:130] >       },
	I1205 00:30:17.761416   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761425   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761430   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.761436   54466 command_runner.go:130] >     },
	I1205 00:30:17.761442   54466 command_runner.go:130] >     {
	I1205 00:30:17.761456   54466 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 00:30:17.761465   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.761473   54466 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 00:30:17.761481   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761488   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.761502   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 00:30:17.761519   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 00:30:17.761528   54466 command_runner.go:130] >       ],
	I1205 00:30:17.761535   54466 command_runner.go:130] >       "size": "742080",
	I1205 00:30:17.761545   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.761551   54466 command_runner.go:130] >         "value": "65535"
	I1205 00:30:17.761565   54466 command_runner.go:130] >       },
	I1205 00:30:17.761576   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.761583   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.761592   54466 command_runner.go:130] >       "pinned": true
	I1205 00:30:17.761597   54466 command_runner.go:130] >     }
	I1205 00:30:17.761604   54466 command_runner.go:130] >   ]
	I1205 00:30:17.761607   54466 command_runner.go:130] > }
	I1205 00:30:17.761861   54466 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:30:17.761878   54466 crio.go:433] Images already preloaded, skipping extraction
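The preload check decodes the output of "sudo crictl images --output json" and verifies that the images required for the requested Kubernetes version are already present. A small sketch of that decode-and-check step (struct fields mirror the JSON shown above; the required-image list is abbreviated and illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl images:", err)
    		return
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Abbreviated list of images expected in the v1.31.2 + CRI-O preload.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.31.2",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/pause:3.10",
    	} {
    		fmt.Printf("%-45s preloaded=%v\n", want, have[want])
    	}
    }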
	I1205 00:30:17.761936   54466 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:30:17.798485   54466 command_runner.go:130] > {
	I1205 00:30:17.798510   54466 command_runner.go:130] >   "images": [
	I1205 00:30:17.798515   54466 command_runner.go:130] >     {
	I1205 00:30:17.798522   54466 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 00:30:17.798527   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798532   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 00:30:17.798536   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798540   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798549   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 00:30:17.798556   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 00:30:17.798560   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798565   54466 command_runner.go:130] >       "size": "94965812",
	I1205 00:30:17.798570   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798574   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798590   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798594   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798597   54466 command_runner.go:130] >     },
	I1205 00:30:17.798601   54466 command_runner.go:130] >     {
	I1205 00:30:17.798610   54466 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 00:30:17.798614   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798619   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 00:30:17.798622   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798634   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798642   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 00:30:17.798649   54466 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 00:30:17.798653   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798658   54466 command_runner.go:130] >       "size": "94958644",
	I1205 00:30:17.798662   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798669   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798673   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798677   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798680   54466 command_runner.go:130] >     },
	I1205 00:30:17.798683   54466 command_runner.go:130] >     {
	I1205 00:30:17.798689   54466 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 00:30:17.798694   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798699   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 00:30:17.798702   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798706   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798714   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 00:30:17.798721   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 00:30:17.798727   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798731   54466 command_runner.go:130] >       "size": "1363676",
	I1205 00:30:17.798736   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798740   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798747   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798751   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798754   54466 command_runner.go:130] >     },
	I1205 00:30:17.798758   54466 command_runner.go:130] >     {
	I1205 00:30:17.798766   54466 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 00:30:17.798770   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798775   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 00:30:17.798779   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798783   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798791   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 00:30:17.798806   54466 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 00:30:17.798817   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798824   54466 command_runner.go:130] >       "size": "31470524",
	I1205 00:30:17.798828   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798834   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.798839   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798845   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798849   54466 command_runner.go:130] >     },
	I1205 00:30:17.798854   54466 command_runner.go:130] >     {
	I1205 00:30:17.798861   54466 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 00:30:17.798867   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798872   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 00:30:17.798879   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798883   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798893   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 00:30:17.798902   54466 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 00:30:17.798908   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798912   54466 command_runner.go:130] >       "size": "63273227",
	I1205 00:30:17.798918   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.798922   54466 command_runner.go:130] >       "username": "nonroot",
	I1205 00:30:17.798926   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.798932   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.798935   54466 command_runner.go:130] >     },
	I1205 00:30:17.798941   54466 command_runner.go:130] >     {
	I1205 00:30:17.798947   54466 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 00:30:17.798953   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.798958   54466 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 00:30:17.798963   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798967   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.798976   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 00:30:17.798985   54466 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 00:30:17.798991   54466 command_runner.go:130] >       ],
	I1205 00:30:17.798995   54466 command_runner.go:130] >       "size": "149009664",
	I1205 00:30:17.799001   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799010   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799017   54466 command_runner.go:130] >       },
	I1205 00:30:17.799022   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799031   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799038   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799041   54466 command_runner.go:130] >     },
	I1205 00:30:17.799047   54466 command_runner.go:130] >     {
	I1205 00:30:17.799053   54466 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 00:30:17.799060   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799065   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 00:30:17.799071   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799075   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799085   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 00:30:17.799094   54466 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 00:30:17.799100   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799104   54466 command_runner.go:130] >       "size": "95274464",
	I1205 00:30:17.799110   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799114   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799120   54466 command_runner.go:130] >       },
	I1205 00:30:17.799124   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799130   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799134   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799139   54466 command_runner.go:130] >     },
	I1205 00:30:17.799143   54466 command_runner.go:130] >     {
	I1205 00:30:17.799151   54466 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 00:30:17.799157   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799162   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 00:30:17.799168   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799172   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799196   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 00:30:17.799206   54466 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 00:30:17.799209   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799213   54466 command_runner.go:130] >       "size": "89474374",
	I1205 00:30:17.799222   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799228   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799231   54466 command_runner.go:130] >       },
	I1205 00:30:17.799238   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799242   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799249   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799252   54466 command_runner.go:130] >     },
	I1205 00:30:17.799258   54466 command_runner.go:130] >     {
	I1205 00:30:17.799272   54466 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 00:30:17.799278   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799283   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 00:30:17.799289   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799293   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799303   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 00:30:17.799314   54466 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 00:30:17.799320   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799324   54466 command_runner.go:130] >       "size": "92783513",
	I1205 00:30:17.799330   54466 command_runner.go:130] >       "uid": null,
	I1205 00:30:17.799334   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799340   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799344   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799349   54466 command_runner.go:130] >     },
	I1205 00:30:17.799352   54466 command_runner.go:130] >     {
	I1205 00:30:17.799359   54466 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 00:30:17.799365   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799370   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 00:30:17.799376   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799380   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799390   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 00:30:17.799399   54466 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 00:30:17.799405   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799409   54466 command_runner.go:130] >       "size": "68457798",
	I1205 00:30:17.799415   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799424   54466 command_runner.go:130] >         "value": "0"
	I1205 00:30:17.799430   54466 command_runner.go:130] >       },
	I1205 00:30:17.799434   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799440   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799444   54466 command_runner.go:130] >       "pinned": false
	I1205 00:30:17.799450   54466 command_runner.go:130] >     },
	I1205 00:30:17.799453   54466 command_runner.go:130] >     {
	I1205 00:30:17.799461   54466 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 00:30:17.799468   54466 command_runner.go:130] >       "repoTags": [
	I1205 00:30:17.799472   54466 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 00:30:17.799478   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799482   54466 command_runner.go:130] >       "repoDigests": [
	I1205 00:30:17.799489   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 00:30:17.799500   54466 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 00:30:17.799506   54466 command_runner.go:130] >       ],
	I1205 00:30:17.799510   54466 command_runner.go:130] >       "size": "742080",
	I1205 00:30:17.799516   54466 command_runner.go:130] >       "uid": {
	I1205 00:30:17.799521   54466 command_runner.go:130] >         "value": "65535"
	I1205 00:30:17.799526   54466 command_runner.go:130] >       },
	I1205 00:30:17.799530   54466 command_runner.go:130] >       "username": "",
	I1205 00:30:17.799536   54466 command_runner.go:130] >       "spec": null,
	I1205 00:30:17.799542   54466 command_runner.go:130] >       "pinned": true
	I1205 00:30:17.799548   54466 command_runner.go:130] >     }
	I1205 00:30:17.799553   54466 command_runner.go:130] >   ]
	I1205 00:30:17.799558   54466 command_runner.go:130] > }
	I1205 00:30:17.799689   54466 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:30:17.799701   54466 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:30:17.799708   54466 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.2 crio true true} ...
	I1205 00:30:17.799809   54466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-591074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
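The kubelet drop-in above is rendered from the node's Kubernetes version, hostname, and IP. A hedged text/template sketch of that rendering (the template literal is reconstructed from the output above; the data struct and field names are illustrative, not minikube's kubeadm.go types):

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	// Values taken from the log above: control-plane node of multinode-591074.
    	_ = t.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.31.2", "multinode-591074", "192.168.39.176"})
    }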
	I1205 00:30:17.799873   54466 ssh_runner.go:195] Run: crio config
	I1205 00:30:17.841237   54466 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 00:30:17.841283   54466 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 00:30:17.841293   54466 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 00:30:17.841299   54466 command_runner.go:130] > #
	I1205 00:30:17.841308   54466 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 00:30:17.841315   54466 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 00:30:17.841321   54466 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 00:30:17.841328   54466 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 00:30:17.841331   54466 command_runner.go:130] > # reload'.
	I1205 00:30:17.841337   54466 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 00:30:17.841347   54466 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 00:30:17.841361   54466 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 00:30:17.841371   54466 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 00:30:17.841378   54466 command_runner.go:130] > [crio]
	I1205 00:30:17.841386   54466 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 00:30:17.841393   54466 command_runner.go:130] > # containers images, in this directory.
	I1205 00:30:17.841399   54466 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 00:30:17.841418   54466 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 00:30:17.841432   54466 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 00:30:17.841446   54466 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 00:30:17.841581   54466 command_runner.go:130] > # imagestore = ""
	I1205 00:30:17.841606   54466 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 00:30:17.841617   54466 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 00:30:17.841729   54466 command_runner.go:130] > storage_driver = "overlay"
	I1205 00:30:17.841745   54466 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 00:30:17.841754   54466 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 00:30:17.841761   54466 command_runner.go:130] > storage_option = [
	I1205 00:30:17.841937   54466 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 00:30:17.842304   54466 command_runner.go:130] > ]
	I1205 00:30:17.842345   54466 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 00:30:17.842357   54466 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 00:30:17.842408   54466 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 00:30:17.842427   54466 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 00:30:17.842437   54466 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 00:30:17.842447   54466 command_runner.go:130] > # always happen on a node reboot
	I1205 00:30:17.842675   54466 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 00:30:17.842699   54466 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 00:30:17.842710   54466 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 00:30:17.842717   54466 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 00:30:17.842843   54466 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 00:30:17.842861   54466 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 00:30:17.842873   54466 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 00:30:17.843013   54466 command_runner.go:130] > # internal_wipe = true
	I1205 00:30:17.843026   54466 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 00:30:17.843031   54466 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 00:30:17.843164   54466 command_runner.go:130] > # internal_repair = false
	I1205 00:30:17.843179   54466 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 00:30:17.843189   54466 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 00:30:17.843198   54466 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 00:30:17.843379   54466 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 00:30:17.843391   54466 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 00:30:17.843396   54466 command_runner.go:130] > [crio.api]
	I1205 00:30:17.843401   54466 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 00:30:17.843562   54466 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 00:30:17.843576   54466 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 00:30:17.843834   54466 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 00:30:17.843853   54466 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 00:30:17.843862   54466 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 00:30:17.844053   54466 command_runner.go:130] > # stream_port = "0"
	I1205 00:30:17.844073   54466 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 00:30:17.844174   54466 command_runner.go:130] > # stream_enable_tls = false
	I1205 00:30:17.844193   54466 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 00:30:17.844332   54466 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 00:30:17.844352   54466 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 00:30:17.844363   54466 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 00:30:17.844369   54466 command_runner.go:130] > # minutes.
	I1205 00:30:17.844461   54466 command_runner.go:130] > # stream_tls_cert = ""
	I1205 00:30:17.844482   54466 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 00:30:17.844495   54466 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 00:30:17.844672   54466 command_runner.go:130] > # stream_tls_key = ""
	I1205 00:30:17.844687   54466 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 00:30:17.844698   54466 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 00:30:17.844724   54466 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 00:30:17.844840   54466 command_runner.go:130] > # stream_tls_ca = ""
	I1205 00:30:17.844858   54466 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 00:30:17.844967   54466 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 00:30:17.844984   54466 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 00:30:17.845072   54466 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
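	(For reference, the two limits above are 16777216 bytes = 16 * 1024 * 1024, i.e. 16 MiB per gRPC message, in place of the 80 MiB default mentioned in the comments.)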
	I1205 00:30:17.845086   54466 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 00:30:17.845095   54466 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 00:30:17.845104   54466 command_runner.go:130] > [crio.runtime]
	I1205 00:30:17.845118   54466 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 00:30:17.845128   54466 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 00:30:17.845138   54466 command_runner.go:130] > # "nofile=1024:2048"
	I1205 00:30:17.845151   54466 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 00:30:17.845181   54466 command_runner.go:130] > # default_ulimits = [
	I1205 00:30:17.845308   54466 command_runner.go:130] > # ]
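	A minimal sketch of an uncommented default_ulimits setting, reusing only the example value from the comment above (not a value from this test run):

	    default_ulimits = [
	    	"nofile=1024:2048",
	    ]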
	I1205 00:30:17.845335   54466 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 00:30:17.845489   54466 command_runner.go:130] > # no_pivot = false
	I1205 00:30:17.845564   54466 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 00:30:17.845581   54466 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 00:30:17.845840   54466 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 00:30:17.845856   54466 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 00:30:17.845864   54466 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 00:30:17.845876   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 00:30:17.845887   54466 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 00:30:17.845895   54466 command_runner.go:130] > # Cgroup setting for conmon
	I1205 00:30:17.845908   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 00:30:17.845917   54466 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 00:30:17.845949   54466 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 00:30:17.845961   54466 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 00:30:17.845972   54466 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 00:30:17.845979   54466 command_runner.go:130] > conmon_env = [
	I1205 00:30:17.845991   54466 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 00:30:17.846000   54466 command_runner.go:130] > ]
	I1205 00:30:17.846013   54466 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 00:30:17.846025   54466 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 00:30:17.846037   54466 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 00:30:17.846045   54466 command_runner.go:130] > # default_env = [
	I1205 00:30:17.846051   54466 command_runner.go:130] > # ]
	I1205 00:30:17.846061   54466 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 00:30:17.846077   54466 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1205 00:30:17.846084   54466 command_runner.go:130] > # selinux = false
	I1205 00:30:17.846099   54466 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 00:30:17.846112   54466 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 00:30:17.846126   54466 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 00:30:17.846136   54466 command_runner.go:130] > # seccomp_profile = ""
	I1205 00:30:17.846148   54466 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 00:30:17.846162   54466 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 00:30:17.846175   54466 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 00:30:17.846186   54466 command_runner.go:130] > # which might increase security.
	I1205 00:30:17.846198   54466 command_runner.go:130] > # This option is currently deprecated,
	I1205 00:30:17.846212   54466 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 00:30:17.846223   54466 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 00:30:17.846235   54466 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 00:30:17.846249   54466 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 00:30:17.846263   54466 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 00:30:17.846279   54466 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1205 00:30:17.846298   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.846372   54466 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 00:30:17.846386   54466 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 00:30:17.846396   54466 command_runner.go:130] > # the cgroup blockio controller.
	I1205 00:30:17.846421   54466 command_runner.go:130] > # blockio_config_file = ""
	I1205 00:30:17.846436   54466 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 00:30:17.846444   54466 command_runner.go:130] > # blockio parameters.
	I1205 00:30:17.846459   54466 command_runner.go:130] > # blockio_reload = false
	I1205 00:30:17.846473   54466 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 00:30:17.846482   54466 command_runner.go:130] > # irqbalance daemon.
	I1205 00:30:17.846494   54466 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 00:30:17.846508   54466 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 00:30:17.846523   54466 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 00:30:17.846538   54466 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 00:30:17.846555   54466 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 00:30:17.846569   54466 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 00:30:17.846578   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.846588   54466 command_runner.go:130] > # rdt_config_file = ""
	I1205 00:30:17.846598   54466 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 00:30:17.846610   54466 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 00:30:17.846650   54466 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 00:30:17.846660   54466 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 00:30:17.846671   54466 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 00:30:17.846685   54466 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 00:30:17.846695   54466 command_runner.go:130] > # will be added.
	I1205 00:30:17.846705   54466 command_runner.go:130] > # default_capabilities = [
	I1205 00:30:17.846714   54466 command_runner.go:130] > # 	"CHOWN",
	I1205 00:30:17.846721   54466 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 00:30:17.846731   54466 command_runner.go:130] > # 	"FSETID",
	I1205 00:30:17.846738   54466 command_runner.go:130] > # 	"FOWNER",
	I1205 00:30:17.846748   54466 command_runner.go:130] > # 	"SETGID",
	I1205 00:30:17.846756   54466 command_runner.go:130] > # 	"SETUID",
	I1205 00:30:17.846764   54466 command_runner.go:130] > # 	"SETPCAP",
	I1205 00:30:17.846774   54466 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 00:30:17.846780   54466 command_runner.go:130] > # 	"KILL",
	I1205 00:30:17.846789   54466 command_runner.go:130] > # ]
	I1205 00:30:17.846803   54466 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 00:30:17.846823   54466 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 00:30:17.846836   54466 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 00:30:17.846850   54466 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 00:30:17.846930   54466 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 00:30:17.846944   54466 command_runner.go:130] > default_sysctls = [
	I1205 00:30:17.846952   54466 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 00:30:17.846961   54466 command_runner.go:130] > ]
	I1205 00:30:17.846971   54466 command_runner.go:130] > # List of devices on the host that a
	I1205 00:30:17.846985   54466 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 00:30:17.846994   54466 command_runner.go:130] > # allowed_devices = [
	I1205 00:30:17.847000   54466 command_runner.go:130] > # 	"/dev/fuse",
	I1205 00:30:17.847005   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847013   54466 command_runner.go:130] > # List of additional devices, specified as
	I1205 00:30:17.847029   54466 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 00:30:17.847042   54466 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 00:30:17.847052   54466 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 00:30:17.847063   54466 command_runner.go:130] > # additional_devices = [
	I1205 00:30:17.847072   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847081   54466 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 00:30:17.847099   54466 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 00:30:17.847109   54466 command_runner.go:130] > # 	"/etc/cdi",
	I1205 00:30:17.847116   54466 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 00:30:17.847124   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847136   54466 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 00:30:17.847149   54466 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 00:30:17.847156   54466 command_runner.go:130] > # Defaults to false.
	I1205 00:30:17.847168   54466 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 00:30:17.847182   54466 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 00:30:17.847196   54466 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 00:30:17.847206   54466 command_runner.go:130] > # hooks_dir = [
	I1205 00:30:17.847219   54466 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 00:30:17.847227   54466 command_runner.go:130] > # ]
	I1205 00:30:17.847238   54466 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 00:30:17.847330   54466 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 00:30:17.847359   54466 command_runner.go:130] > # its default mounts from the following two files:
	I1205 00:30:17.847365   54466 command_runner.go:130] > #
	I1205 00:30:17.847373   54466 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 00:30:17.847383   54466 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 00:30:17.847392   54466 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 00:30:17.847401   54466 command_runner.go:130] > #
	I1205 00:30:17.847413   54466 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 00:30:17.847427   54466 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 00:30:17.847442   54466 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 00:30:17.847454   54466 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 00:30:17.847462   54466 command_runner.go:130] > #
	I1205 00:30:17.847471   54466 command_runner.go:130] > # default_mounts_file = ""
	I1205 00:30:17.847482   54466 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 00:30:17.847494   54466 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 00:30:17.847504   54466 command_runner.go:130] > pids_limit = 1024
	I1205 00:30:17.847517   54466 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 00:30:17.847531   54466 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 00:30:17.847546   54466 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 00:30:17.847564   54466 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 00:30:17.847574   54466 command_runner.go:130] > # log_size_max = -1
	I1205 00:30:17.847587   54466 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 00:30:17.847596   54466 command_runner.go:130] > # log_to_journald = false
	I1205 00:30:17.847609   54466 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 00:30:17.847621   54466 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 00:30:17.847641   54466 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 00:30:17.847654   54466 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 00:30:17.847667   54466 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 00:30:17.847678   54466 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 00:30:17.847688   54466 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 00:30:17.847697   54466 command_runner.go:130] > # read_only = false
	I1205 00:30:17.847709   54466 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 00:30:17.847723   54466 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 00:30:17.847743   54466 command_runner.go:130] > # live configuration reload.
	I1205 00:30:17.847753   54466 command_runner.go:130] > # log_level = "info"
	I1205 00:30:17.847764   54466 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 00:30:17.847776   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.847787   54466 command_runner.go:130] > # log_filter = ""
	I1205 00:30:17.847798   54466 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 00:30:17.847812   54466 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 00:30:17.847822   54466 command_runner.go:130] > # separated by comma.
	I1205 00:30:17.847836   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847847   54466 command_runner.go:130] > # uid_mappings = ""
	I1205 00:30:17.847861   54466 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 00:30:17.847874   54466 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 00:30:17.847885   54466 command_runner.go:130] > # separated by comma.
	I1205 00:30:17.847899   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847908   54466 command_runner.go:130] > # gid_mappings = ""
	I1205 00:30:17.847919   54466 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 00:30:17.847933   54466 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 00:30:17.847947   54466 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 00:30:17.847963   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.847970   54466 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 00:30:17.848017   54466 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 00:30:17.848030   54466 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 00:30:17.848043   54466 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 00:30:17.848056   54466 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 00:30:17.848066   54466 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 00:30:17.848077   54466 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 00:30:17.848090   54466 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 00:30:17.848103   54466 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 00:30:17.848119   54466 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 00:30:17.848131   54466 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 00:30:17.848144   54466 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 00:30:17.848153   54466 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 00:30:17.848164   54466 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 00:30:17.848184   54466 command_runner.go:130] > drop_infra_ctr = false
	I1205 00:30:17.848199   54466 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 00:30:17.848218   54466 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 00:30:17.848232   54466 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 00:30:17.848239   54466 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 00:30:17.848254   54466 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 00:30:17.848263   54466 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 00:30:17.848273   54466 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 00:30:17.848284   54466 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 00:30:17.848294   54466 command_runner.go:130] > # shared_cpuset = ""
	I1205 00:30:17.848303   54466 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 00:30:17.848315   54466 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 00:30:17.848322   54466 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 00:30:17.848337   54466 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 00:30:17.848348   54466 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 00:30:17.848358   54466 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 00:30:17.848371   54466 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 00:30:17.848382   54466 command_runner.go:130] > # enable_criu_support = false
	I1205 00:30:17.848390   54466 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 00:30:17.848403   54466 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 00:30:17.848421   54466 command_runner.go:130] > # enable_pod_events = false
	I1205 00:30:17.848435   54466 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 00:30:17.848457   54466 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 00:30:17.848466   54466 command_runner.go:130] > # default_runtime = "runc"
	I1205 00:30:17.848475   54466 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 00:30:17.848489   54466 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1205 00:30:17.848507   54466 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 00:30:17.848518   54466 command_runner.go:130] > # creation as a file is not desired either.
	I1205 00:30:17.848537   54466 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 00:30:17.848554   54466 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 00:30:17.848561   54466 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 00:30:17.848567   54466 command_runner.go:130] > # ]
	I1205 00:30:17.848587   54466 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 00:30:17.848598   54466 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 00:30:17.848611   54466 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 00:30:17.848623   54466 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 00:30:17.848632   54466 command_runner.go:130] > #
	I1205 00:30:17.848643   54466 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 00:30:17.848653   54466 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 00:30:17.848683   54466 command_runner.go:130] > # runtime_type = "oci"
	I1205 00:30:17.848694   54466 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 00:30:17.848705   54466 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 00:30:17.848717   54466 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 00:30:17.848725   54466 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 00:30:17.848734   54466 command_runner.go:130] > # monitor_env = []
	I1205 00:30:17.848745   54466 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 00:30:17.848755   54466 command_runner.go:130] > # allowed_annotations = []
	I1205 00:30:17.848767   54466 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 00:30:17.848773   54466 command_runner.go:130] > # Where:
	I1205 00:30:17.848782   54466 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 00:30:17.848796   54466 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 00:30:17.848810   54466 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 00:30:17.848824   54466 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 00:30:17.848834   54466 command_runner.go:130] > #   in $PATH.
	I1205 00:30:17.848846   54466 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 00:30:17.848857   54466 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 00:30:17.848869   54466 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 00:30:17.848877   54466 command_runner.go:130] > #   state.
	I1205 00:30:17.848884   54466 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 00:30:17.848896   54466 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 00:30:17.848910   54466 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 00:30:17.848923   54466 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 00:30:17.848936   54466 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 00:30:17.848949   54466 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 00:30:17.848959   54466 command_runner.go:130] > #   The currently recognized values are:
	I1205 00:30:17.848982   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 00:30:17.848998   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 00:30:17.849014   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 00:30:17.849024   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 00:30:17.849040   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 00:30:17.849055   54466 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 00:30:17.849070   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 00:30:17.849083   54466 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 00:30:17.849092   54466 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 00:30:17.849104   54466 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 00:30:17.849115   54466 command_runner.go:130] > #   deprecated option "conmon".
	I1205 00:30:17.849130   54466 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 00:30:17.849143   54466 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 00:30:17.849158   54466 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 00:30:17.849169   54466 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 00:30:17.849180   54466 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 00:30:17.849190   54466 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 00:30:17.849204   54466 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 00:30:17.849217   54466 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 00:30:17.849226   54466 command_runner.go:130] > #
	I1205 00:30:17.849238   54466 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 00:30:17.849246   54466 command_runner.go:130] > #
	I1205 00:30:17.849256   54466 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 00:30:17.849270   54466 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 00:30:17.849279   54466 command_runner.go:130] > #
	I1205 00:30:17.849289   54466 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 00:30:17.849301   54466 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 00:30:17.849309   54466 command_runner.go:130] > #
	I1205 00:30:17.849323   54466 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 00:30:17.849332   54466 command_runner.go:130] > # feature.
	I1205 00:30:17.849340   54466 command_runner.go:130] > #
	I1205 00:30:17.849351   54466 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 00:30:17.849363   54466 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 00:30:17.849376   54466 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 00:30:17.849388   54466 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 00:30:17.849402   54466 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 00:30:17.849411   54466 command_runner.go:130] > #
	I1205 00:30:17.849424   54466 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 00:30:17.849440   54466 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 00:30:17.849449   54466 command_runner.go:130] > #
	I1205 00:30:17.849460   54466 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 00:30:17.849470   54466 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 00:30:17.849478   54466 command_runner.go:130] > #
	I1205 00:30:17.849490   54466 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 00:30:17.849503   54466 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 00:30:17.849513   54466 command_runner.go:130] > # limitation.
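	A hedged sketch of the wiring described above, not taken from this run's configuration: the chosen runtime handler lists the annotation under allowed_annotations, and the pod carries the annotation with the value "stop" together with restartPolicy: Never so the kubelet does not restart the terminated container.

	    [crio.runtime.runtimes.runc]
	    allowed_annotations = [
	    	"io.kubernetes.cri-o.seccompNotifierAction",
	    ]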
	I1205 00:30:17.849523   54466 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 00:30:17.849533   54466 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 00:30:17.849543   54466 command_runner.go:130] > runtime_type = "oci"
	I1205 00:30:17.849553   54466 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 00:30:17.849561   54466 command_runner.go:130] > runtime_config_path = ""
	I1205 00:30:17.849568   54466 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 00:30:17.849578   54466 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 00:30:17.849588   54466 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 00:30:17.849595   54466 command_runner.go:130] > monitor_env = [
	I1205 00:30:17.849609   54466 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 00:30:17.849618   54466 command_runner.go:130] > ]
	I1205 00:30:17.849629   54466 command_runner.go:130] > privileged_without_host_devices = false
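	As an illustration of the [crio.runtime.runtimes.runtime-handler] format documented above, a second handler could sit next to runc; the handler name and paths below are assumptions for the sketch, not part of the generated config. A pod would select it through a Kubernetes RuntimeClass whose handler field matches the table name.

	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    monitor_path = "/usr/libexec/crio/conmon"
	    monitor_cgroup = "pod"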
	I1205 00:30:17.849642   54466 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 00:30:17.849653   54466 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 00:30:17.849663   54466 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 00:30:17.849679   54466 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 00:30:17.849695   54466 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 00:30:17.849709   54466 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 00:30:17.849726   54466 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 00:30:17.849742   54466 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 00:30:17.849755   54466 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 00:30:17.849765   54466 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 00:30:17.849772   54466 command_runner.go:130] > # Example:
	I1205 00:30:17.849779   54466 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 00:30:17.849788   54466 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 00:30:17.849796   54466 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 00:30:17.849804   54466 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 00:30:17.849809   54466 command_runner.go:130] > # cpuset = 0
	I1205 00:30:17.849814   54466 command_runner.go:130] > # cpushares = "0-1"
	I1205 00:30:17.849820   54466 command_runner.go:130] > # Where:
	I1205 00:30:17.849831   54466 command_runner.go:130] > # The workload name is workload-type.
	I1205 00:30:17.849841   54466 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 00:30:17.849847   54466 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 00:30:17.849853   54466 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 00:30:17.849868   54466 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 00:30:17.849877   54466 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 00:30:17.849885   54466 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 00:30:17.849896   54466 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 00:30:17.849903   54466 command_runner.go:130] > # Default value is set to true
	I1205 00:30:17.849911   54466 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 00:30:17.849920   54466 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 00:30:17.849927   54466 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 00:30:17.849933   54466 command_runner.go:130] > # Default value is set to 'false'
	I1205 00:30:17.849937   54466 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 00:30:17.849965   54466 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 00:30:17.849980   54466 command_runner.go:130] > #
	I1205 00:30:17.849993   54466 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 00:30:17.850006   54466 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 00:30:17.850019   54466 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 00:30:17.850028   54466 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 00:30:17.850042   54466 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 00:30:17.850052   54466 command_runner.go:130] > [crio.image]
	I1205 00:30:17.850062   54466 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 00:30:17.850073   54466 command_runner.go:130] > # default_transport = "docker://"
	I1205 00:30:17.850086   54466 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 00:30:17.850099   54466 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 00:30:17.850109   54466 command_runner.go:130] > # global_auth_file = ""
	I1205 00:30:17.850120   54466 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 00:30:17.850129   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.850136   54466 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 00:30:17.850151   54466 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 00:30:17.850165   54466 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 00:30:17.850177   54466 command_runner.go:130] > # This option supports live configuration reload.
	I1205 00:30:17.850188   54466 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 00:30:17.850200   54466 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 00:30:17.850212   54466 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 00:30:17.850228   54466 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 00:30:17.850237   54466 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 00:30:17.850247   54466 command_runner.go:130] > # pause_command = "/pause"
	I1205 00:30:17.850261   54466 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 00:30:17.850274   54466 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 00:30:17.850287   54466 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 00:30:17.850300   54466 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 00:30:17.850311   54466 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 00:30:17.850325   54466 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 00:30:17.850334   54466 command_runner.go:130] > # pinned_images = [
	I1205 00:30:17.850341   54466 command_runner.go:130] > # ]
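	As a sketch only, pinning the pause image configured earlier in this file would look like the following (the value mirrors pause_image above; nothing is pinned in this run):

	    pinned_images = [
	    	"registry.k8s.io/pause:3.10",
	    ]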
	I1205 00:30:17.850355   54466 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 00:30:17.850365   54466 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 00:30:17.850379   54466 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 00:30:17.850393   54466 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 00:30:17.850405   54466 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 00:30:17.850412   54466 command_runner.go:130] > # signature_policy = ""
	I1205 00:30:17.850424   54466 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 00:30:17.850435   54466 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 00:30:17.850444   54466 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 00:30:17.850456   54466 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1205 00:30:17.850465   54466 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 00:30:17.850472   54466 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 00:30:17.850481   54466 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 00:30:17.850491   54466 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 00:30:17.850498   54466 command_runner.go:130] > # changing them here.
	I1205 00:30:17.850505   54466 command_runner.go:130] > # insecure_registries = [
	I1205 00:30:17.850512   54466 command_runner.go:130] > # ]
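	If a registry ever had to be pulled from without TLS verification, the commented list above would be filled in as sketched below; the hostname is a placeholder, and the comment's advice to prefer /etc/containers/registries.conf still applies:

	    insecure_registries = [
	    	"registry.example.internal:5000",
	    ]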
	I1205 00:30:17.850525   54466 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 00:30:17.850538   54466 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 00:30:17.850546   54466 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 00:30:17.850557   54466 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 00:30:17.850564   54466 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 00:30:17.850570   54466 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 00:30:17.850577   54466 command_runner.go:130] > # CNI plugins.
	I1205 00:30:17.850580   54466 command_runner.go:130] > [crio.network]
	I1205 00:30:17.850589   54466 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 00:30:17.850597   54466 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 00:30:17.850604   54466 command_runner.go:130] > # cni_default_network = ""
	I1205 00:30:17.850611   54466 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 00:30:17.850618   54466 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 00:30:17.850626   54466 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 00:30:17.850632   54466 command_runner.go:130] > # plugin_dirs = [
	I1205 00:30:17.850636   54466 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 00:30:17.850642   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850648   54466 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 00:30:17.850654   54466 command_runner.go:130] > [crio.metrics]
	I1205 00:30:17.850660   54466 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 00:30:17.850666   54466 command_runner.go:130] > enable_metrics = true
	I1205 00:30:17.850670   54466 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 00:30:17.850677   54466 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 00:30:17.850683   54466 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 00:30:17.850691   54466 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 00:30:17.850701   54466 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 00:30:17.850708   54466 command_runner.go:130] > # metrics_collectors = [
	I1205 00:30:17.850712   54466 command_runner.go:130] > # 	"operations",
	I1205 00:30:17.850720   54466 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 00:30:17.850724   54466 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 00:30:17.850731   54466 command_runner.go:130] > # 	"operations_errors",
	I1205 00:30:17.850735   54466 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 00:30:17.850742   54466 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 00:30:17.850746   54466 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 00:30:17.850752   54466 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 00:30:17.850756   54466 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 00:30:17.850763   54466 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 00:30:17.850767   54466 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 00:30:17.850773   54466 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 00:30:17.850778   54466 command_runner.go:130] > # 	"containers_oom_total",
	I1205 00:30:17.850784   54466 command_runner.go:130] > # 	"containers_oom",
	I1205 00:30:17.850788   54466 command_runner.go:130] > # 	"processes_defunct",
	I1205 00:30:17.850794   54466 command_runner.go:130] > # 	"operations_total",
	I1205 00:30:17.850799   54466 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 00:30:17.850805   54466 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 00:30:17.850809   54466 command_runner.go:130] > # 	"operations_errors_total",
	I1205 00:30:17.850816   54466 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 00:30:17.850820   54466 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 00:30:17.850827   54466 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 00:30:17.850831   54466 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 00:30:17.850835   54466 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 00:30:17.850841   54466 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 00:30:17.850848   54466 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 00:30:17.850865   54466 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 00:30:17.850877   54466 command_runner.go:130] > # ]
	I1205 00:30:17.850885   54466 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 00:30:17.850890   54466 command_runner.go:130] > # metrics_port = 9090
	I1205 00:30:17.850897   54466 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 00:30:17.850903   54466 command_runner.go:130] > # metrics_socket = ""
	I1205 00:30:17.850911   54466 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 00:30:17.850917   54466 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 00:30:17.850925   54466 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 00:30:17.850930   54466 command_runner.go:130] > # certificate on any modification event.
	I1205 00:30:17.850936   54466 command_runner.go:130] > # metrics_cert = ""
	I1205 00:30:17.850941   54466 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 00:30:17.850948   54466 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 00:30:17.850952   54466 command_runner.go:130] > # metrics_key = ""
	I1205 00:30:17.850961   54466 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 00:30:17.850965   54466 command_runner.go:130] > [crio.tracing]
	I1205 00:30:17.850976   54466 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 00:30:17.850982   54466 command_runner.go:130] > # enable_tracing = false
	I1205 00:30:17.851003   54466 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 00:30:17.851011   54466 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 00:30:17.851018   54466 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 00:30:17.851022   54466 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 00:30:17.851029   54466 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 00:30:17.851033   54466 command_runner.go:130] > [crio.nri]
	I1205 00:30:17.851040   54466 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 00:30:17.851044   54466 command_runner.go:130] > # enable_nri = false
	I1205 00:30:17.851049   54466 command_runner.go:130] > # NRI socket to listen on.
	I1205 00:30:17.851053   54466 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 00:30:17.851057   54466 command_runner.go:130] > # NRI plugin directory to use.
	I1205 00:30:17.851062   54466 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 00:30:17.851066   54466 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 00:30:17.851070   54466 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 00:30:17.851075   54466 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 00:30:17.851079   54466 command_runner.go:130] > # nri_disable_connections = false
	I1205 00:30:17.851084   54466 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 00:30:17.851091   54466 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 00:30:17.851097   54466 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 00:30:17.851103   54466 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 00:30:17.851111   54466 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 00:30:17.851115   54466 command_runner.go:130] > [crio.stats]
	I1205 00:30:17.851122   54466 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 00:30:17.851128   54466 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 00:30:17.851134   54466 command_runner.go:130] > # stats_collection_period = 0
	I1205 00:30:17.851166   54466 command_runner.go:130] ! time="2024-12-05 00:30:17.814450691Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 00:30:17.851183   54466 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 00:30:17.851298   54466 cni.go:84] Creating CNI manager for ""
	I1205 00:30:17.851309   54466 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 00:30:17.851316   54466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:30:17.851335   54466 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-591074 NodeName:multinode-591074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:30:17.851461   54466 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-591074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 00:30:17.851522   54466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:30:17.861550   54466 command_runner.go:130] > kubeadm
	I1205 00:30:17.861572   54466 command_runner.go:130] > kubectl
	I1205 00:30:17.861576   54466 command_runner.go:130] > kubelet
	I1205 00:30:17.861661   54466 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:30:17.861734   54466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:30:17.871681   54466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 00:30:17.888302   54466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:30:17.904527   54466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1205 00:30:17.920755   54466 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I1205 00:30:17.924748   54466 command_runner.go:130] > 192.168.39.176	control-plane.minikube.internal
	I1205 00:30:17.924819   54466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:30:18.062202   54466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:30:18.077511   54466 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074 for IP: 192.168.39.176
	I1205 00:30:18.077539   54466 certs.go:194] generating shared ca certs ...
	I1205 00:30:18.077559   54466 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:30:18.077724   54466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:30:18.077775   54466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:30:18.077789   54466 certs.go:256] generating profile certs ...
	I1205 00:30:18.077891   54466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/client.key
	I1205 00:30:18.077988   54466 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key.738cda39
	I1205 00:30:18.078046   54466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key
	I1205 00:30:18.078059   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 00:30:18.078071   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 00:30:18.078083   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 00:30:18.078093   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 00:30:18.078104   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 00:30:18.078114   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 00:30:18.078126   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 00:30:18.078138   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 00:30:18.078184   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:30:18.078212   54466 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:30:18.078222   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:30:18.078261   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:30:18.078286   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:30:18.078310   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:30:18.078346   54466 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:30:18.078372   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.078386   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem -> /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.078401   54466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.079033   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:30:18.101530   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:30:18.124472   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:30:18.146972   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:30:18.170126   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 00:30:18.192726   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:30:18.214601   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:30:18.236515   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/multinode-591074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 00:30:18.258658   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:30:18.281138   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:30:18.302796   54466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:30:18.324650   54466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:30:18.340314   54466 ssh_runner.go:195] Run: openssl version
	I1205 00:30:18.346073   54466 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 00:30:18.346172   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:30:18.356478   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360515   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360622   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.360680   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:30:18.366047   54466 command_runner.go:130] > 51391683
	I1205 00:30:18.366120   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:30:18.375638   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:30:18.385794   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389747   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389906   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.389957   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:30:18.395113   54466 command_runner.go:130] > 3ec20f2e
	I1205 00:30:18.395184   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:30:18.404106   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:30:18.414224   54466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418224   54466 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418447   54466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.418494   54466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:30:18.423607   54466 command_runner.go:130] > b5213941
	I1205 00:30:18.423805   54466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
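The three symlink steps above follow OpenSSL's c_rehash convention: each CA file under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and linked as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A minimal Go sketch of one such step, shelling out to openssl the same way the runner does; paths and error handling are simplified for illustration and are not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink that TLS clients expect.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Remove any existing entry first so the symlink call cannot fail on re-runs.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}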
	I1205 00:30:18.433456   54466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:30:18.437701   54466 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:30:18.437727   54466 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 00:30:18.437733   54466 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1205 00:30:18.437739   54466 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 00:30:18.437745   54466 command_runner.go:130] > Access: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437750   54466 command_runner.go:130] > Modify: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437757   54466 command_runner.go:130] > Change: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437764   54466 command_runner.go:130] >  Birth: 2024-12-05 00:23:28.167861905 +0000
	I1205 00:30:18.437820   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:30:18.443176   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.443253   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:30:18.448416   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.448573   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:30:18.453672   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.453876   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:30:18.459259   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.459344   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:30:18.464585   54466 command_runner.go:130] > Certificate will not expire
	I1205 00:30:18.464650   54466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 00:30:18.469875   54466 command_runner.go:130] > Certificate will not expire
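Each `-checkend 86400` call above asks whether the certificate expires within the next 24 hours. The equivalent check in pure Go, shown as a sketch (the file path below is taken from the log for illustration):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires before now+d, matching `openssl x509 -checkend`.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}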
	I1205 00:30:18.469963   54466 kubeadm.go:392] StartCluster: {Name:multinode-591074 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-591074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:30:18.470086   54466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:30:18.470138   54466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:30:18.541088   54466 command_runner.go:130] > 11bddc006567c030c5056b59664a40db900aebbb6d98e5782c4b942e18c1624d
	I1205 00:30:18.541121   54466 command_runner.go:130] > f38c0e934e2c97ceefdf38fc780932bd818678a2301a31195a27b0d132b7ff46
	I1205 00:30:18.541129   54466 command_runner.go:130] > dfec96603c687760fb699c4bc5ff8a14e85642a6128d6caae3f470cb125d4116
	I1205 00:30:18.541138   54466 command_runner.go:130] > a1906ff20d3426137e839ca45d6f42b4dcb7e4bd1f39080d967333ceccc3302d
	I1205 00:30:18.541147   54466 command_runner.go:130] > 384cbec2fc8e9d2a44bc883049f95d0c93d52bce3c682d0af92c07dc8fb19f58
	I1205 00:30:18.541155   54466 command_runner.go:130] > 1a11d940106d859696de5b98854555726c5002a9b30b82f879cc6b6ba76bf879
	I1205 00:30:18.541163   54466 command_runner.go:130] > cc88ace22ccdb015b766b0ff94f86b4f1454d86644677834fbafe424b4bf3bb6
	I1205 00:30:18.541177   54466 command_runner.go:130] > 9126a5b156206e62a1f6ba394b3da766590d9772a20d32046b01ad201273b7bc
	I1205 00:30:18.541206   54466 cri.go:89] found id: "11bddc006567c030c5056b59664a40db900aebbb6d98e5782c4b942e18c1624d"
	I1205 00:30:18.541219   54466 cri.go:89] found id: "f38c0e934e2c97ceefdf38fc780932bd818678a2301a31195a27b0d132b7ff46"
	I1205 00:30:18.541224   54466 cri.go:89] found id: "dfec96603c687760fb699c4bc5ff8a14e85642a6128d6caae3f470cb125d4116"
	I1205 00:30:18.541229   54466 cri.go:89] found id: "a1906ff20d3426137e839ca45d6f42b4dcb7e4bd1f39080d967333ceccc3302d"
	I1205 00:30:18.541233   54466 cri.go:89] found id: "384cbec2fc8e9d2a44bc883049f95d0c93d52bce3c682d0af92c07dc8fb19f58"
	I1205 00:30:18.541240   54466 cri.go:89] found id: "1a11d940106d859696de5b98854555726c5002a9b30b82f879cc6b6ba76bf879"
	I1205 00:30:18.541244   54466 cri.go:89] found id: "cc88ace22ccdb015b766b0ff94f86b4f1454d86644677834fbafe424b4bf3bb6"
	I1205 00:30:18.541248   54466 cri.go:89] found id: "9126a5b156206e62a1f6ba394b3da766590d9772a20d32046b01ad201273b7bc"
	I1205 00:30:18.541255   54466 cri.go:89] found id: ""
	I1205 00:30:18.541308   54466 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 00:30:18.593717   54466 command_runner.go:130] ! load container cb51b07c586cd0d31246296e4528a6fe36a997b52d690de97104caeffcdca8aa: container does not exist
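StartCluster begins by listing any existing kube-system containers through crictl, as shown above, and then inspects each returned ID. A condensed Go sketch of that listing step; the sudo/crictl invocation is taken verbatim from the log, while the splitting and filtering here are simplified and not minikube's actual cri.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the IDs of all CRI containers (running
	// or not) whose pod namespace label is kube-system, using crictl.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}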

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-591074 -n multinode-591074
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-591074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.17s)

                                                
                                    
x
+
TestPreload (170.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-792543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-792543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.528621471s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-792543 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-792543 image pull gcr.io/k8s-minikube/busybox: (3.386884935s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-792543
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-792543: (7.292885179s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-792543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1205 00:40:07.017843   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-792543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.964641651s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-792543 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
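The assertion that failed here is a plain substring check over the `image list` output: gcr.io/k8s-minikube/busybox was pulled before the stop, so it should still appear after the restart with preloads enabled, yet the list above contains only the preloaded Kubernetes images. A hedged sketch of that kind of check; the helper name and the sample output are illustrative, not the test's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	// containsImage reports whether any line of `minikube image list` output
	// mentions the wanted image, ignoring surrounding whitespace.
	func containsImage(imageListOutput, image string) bool {
		for _, line := range strings.Split(imageListOutput, "\n") {
			if strings.Contains(strings.TrimSpace(line), image) {
				return true
			}
		}
		return false
	}

	func main() {
		output := "registry.k8s.io/pause:3.7\ngcr.io/k8s-minikube/storage-provisioner:v5\n"
		if !containsImage(output, "gcr.io/k8s-minikube/busybox") {
			fmt.Println("Expected to find gcr.io/k8s-minikube/busybox in image list output")
		}
	}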
panic.go:629: *** TestPreload FAILED at 2024-12-05 00:41:09.18140379 +0000 UTC m=+5398.945203870
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-792543 -n test-preload-792543
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-792543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-792543 logs -n 25: (1.008462471s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074 sudo cat                                       | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt                       | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m02:/home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n                                                                 | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | multinode-591074-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-591074 ssh -n multinode-591074-m02 sudo cat                                   | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-591074 node stop m03                                                          | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:25 UTC |
	| node    | multinode-591074 node start                                                             | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:25 UTC | 05 Dec 24 00:26 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| stop    | -p multinode-591074                                                                     | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:26 UTC |                     |
	| start   | -p multinode-591074                                                                     | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:28 UTC | 05 Dec 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC |                     |
	| node    | multinode-591074 node delete                                                            | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC | 05 Dec 24 00:32 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-591074 stop                                                                   | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:32 UTC |                     |
	| start   | -p multinode-591074                                                                     | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:34 UTC | 05 Dec 24 00:37 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-591074                                                                | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:37 UTC |                     |
	| start   | -p multinode-591074-m02                                                                 | multinode-591074-m02 | jenkins | v1.34.0 | 05 Dec 24 00:37 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-591074-m03                                                                 | multinode-591074-m03 | jenkins | v1.34.0 | 05 Dec 24 00:37 UTC | 05 Dec 24 00:38 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-591074                                                                 | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:38 UTC |                     |
	| delete  | -p multinode-591074-m03                                                                 | multinode-591074-m03 | jenkins | v1.34.0 | 05 Dec 24 00:38 UTC | 05 Dec 24 00:38 UTC |
	| delete  | -p multinode-591074                                                                     | multinode-591074     | jenkins | v1.34.0 | 05 Dec 24 00:38 UTC | 05 Dec 24 00:38 UTC |
	| start   | -p test-preload-792543                                                                  | test-preload-792543  | jenkins | v1.34.0 | 05 Dec 24 00:38 UTC | 05 Dec 24 00:39 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-792543 image pull                                                          | test-preload-792543  | jenkins | v1.34.0 | 05 Dec 24 00:39 UTC | 05 Dec 24 00:39 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-792543                                                                  | test-preload-792543  | jenkins | v1.34.0 | 05 Dec 24 00:39 UTC | 05 Dec 24 00:40 UTC |
	| start   | -p test-preload-792543                                                                  | test-preload-792543  | jenkins | v1.34.0 | 05 Dec 24 00:40 UTC | 05 Dec 24 00:41 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-792543 image list                                                          | test-preload-792543  | jenkins | v1.34.0 | 05 Dec 24 00:41 UTC | 05 Dec 24 00:41 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:40:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:40:02.039225   58736 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:40:02.039361   58736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:40:02.039371   58736 out.go:358] Setting ErrFile to fd 2...
	I1205 00:40:02.039375   58736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:40:02.039549   58736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:40:02.040103   58736 out.go:352] Setting JSON to false
	I1205 00:40:02.040987   58736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8548,"bootTime":1733350654,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:40:02.041100   58736 start.go:139] virtualization: kvm guest
	I1205 00:40:02.043435   58736 out.go:177] * [test-preload-792543] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:40:02.044771   58736 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:40:02.044832   58736 notify.go:220] Checking for updates...
	I1205 00:40:02.047299   58736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:40:02.048591   58736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:40:02.049975   58736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:40:02.051309   58736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:40:02.052563   58736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:40:02.054161   58736 config.go:182] Loaded profile config "test-preload-792543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 00:40:02.054596   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:02.054654   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:02.069462   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I1205 00:40:02.069977   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:02.070597   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:02.070621   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:02.070963   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:02.071135   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:02.073132   58736 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 00:40:02.074366   58736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:40:02.074657   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:02.074708   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:02.089209   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33929
	I1205 00:40:02.089636   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:02.090132   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:02.090155   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:02.090453   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:02.090634   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:02.127207   58736 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:40:02.128615   58736 start.go:297] selected driver: kvm2
	I1205 00:40:02.128639   58736 start.go:901] validating driver "kvm2" against &{Name:test-preload-792543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-792543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:40:02.128751   58736 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:40:02.129501   58736 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:40:02.129591   58736 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:40:02.144522   58736 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:40:02.144904   58736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:40:02.144931   58736 cni.go:84] Creating CNI manager for ""
	I1205 00:40:02.144970   58736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:40:02.145033   58736 start.go:340] cluster config:
	{Name:test-preload-792543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-792543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:40:02.145145   58736 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:40:02.146937   58736 out.go:177] * Starting "test-preload-792543" primary control-plane node in "test-preload-792543" cluster
	I1205 00:40:02.148219   58736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 00:40:02.251575   58736 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 00:40:02.251606   58736 cache.go:56] Caching tarball of preloaded images
	I1205 00:40:02.251765   58736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 00:40:02.253461   58736 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1205 00:40:02.254694   58736 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 00:40:02.356881   58736 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 00:40:13.756167   58736 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 00:40:13.756261   58736 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 00:40:14.591735   58736 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
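The download URL above carries a `checksum=md5:...` parameter, and the "verifying checksum" step recomputes that digest over the saved tarball before the preload is trusted. A minimal Go sketch of that kind of check, assuming MD5 as indicated by the URL; the file name and expected digest below are copied from the log for illustration and this is not the downloader's real API:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through MD5 and compares the hex digest with
	// the checksum advertised alongside the preload tarball.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		err := verifyMD5(
			"preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
			"b2ee0ab83ed99f9e7ff71cb0cf27e8f9",
		)
		fmt.Println("verify:", err)
	}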
	I1205 00:40:14.591858   58736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/config.json ...
	I1205 00:40:14.592101   58736 start.go:360] acquireMachinesLock for test-preload-792543: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:40:14.592165   58736 start.go:364] duration metric: took 41.685µs to acquireMachinesLock for "test-preload-792543"
	I1205 00:40:14.592176   58736 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:40:14.592184   58736 fix.go:54] fixHost starting: 
	I1205 00:40:14.592438   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:14.592470   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:14.606758   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
	I1205 00:40:14.607249   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:14.607713   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:14.607743   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:14.608088   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:14.608282   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:14.608465   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetState
	I1205 00:40:14.610093   58736 fix.go:112] recreateIfNeeded on test-preload-792543: state=Stopped err=<nil>
	I1205 00:40:14.610125   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	W1205 00:40:14.610279   58736 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:40:14.611935   58736 out.go:177] * Restarting existing kvm2 VM for "test-preload-792543" ...
	I1205 00:40:14.612909   58736 main.go:141] libmachine: (test-preload-792543) Calling .Start
	I1205 00:40:14.613099   58736 main.go:141] libmachine: (test-preload-792543) Ensuring networks are active...
	I1205 00:40:14.613823   58736 main.go:141] libmachine: (test-preload-792543) Ensuring network default is active
	I1205 00:40:14.614307   58736 main.go:141] libmachine: (test-preload-792543) Ensuring network mk-test-preload-792543 is active
	I1205 00:40:14.614831   58736 main.go:141] libmachine: (test-preload-792543) Getting domain xml...
	I1205 00:40:14.615660   58736 main.go:141] libmachine: (test-preload-792543) Creating domain...
	I1205 00:40:15.807510   58736 main.go:141] libmachine: (test-preload-792543) Waiting to get IP...
	I1205 00:40:15.808597   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:15.808957   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:15.809054   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:15.808969   58820 retry.go:31] will retry after 211.406461ms: waiting for machine to come up
	I1205 00:40:16.022384   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:16.022832   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:16.022855   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:16.022797   58820 retry.go:31] will retry after 351.638721ms: waiting for machine to come up
	I1205 00:40:16.376561   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:16.376999   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:16.377024   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:16.376959   58820 retry.go:31] will retry after 437.150315ms: waiting for machine to come up
	I1205 00:40:16.815727   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:16.816090   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:16.816116   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:16.816051   58820 retry.go:31] will retry after 539.033065ms: waiting for machine to come up
	I1205 00:40:17.356609   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:17.357058   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:17.357084   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:17.357025   58820 retry.go:31] will retry after 714.091825ms: waiting for machine to come up
	I1205 00:40:18.072837   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:18.073294   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:18.073318   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:18.073250   58820 retry.go:31] will retry after 676.270641ms: waiting for machine to come up
	I1205 00:40:18.751250   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:18.751681   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:18.751728   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:18.751627   58820 retry.go:31] will retry after 1.026896225s: waiting for machine to come up
	I1205 00:40:19.780139   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:19.780572   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:19.780600   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:19.780526   58820 retry.go:31] will retry after 933.163671ms: waiting for machine to come up
	I1205 00:40:20.715523   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:20.715879   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:20.715897   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:20.715837   58820 retry.go:31] will retry after 1.668631818s: waiting for machine to come up
	I1205 00:40:22.386604   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:22.387012   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:22.387040   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:22.386971   58820 retry.go:31] will retry after 2.057068879s: waiting for machine to come up
	I1205 00:40:24.447248   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:24.447705   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:24.447735   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:24.447656   58820 retry.go:31] will retry after 1.769480358s: waiting for machine to come up
	I1205 00:40:26.219008   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:26.219487   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:26.219517   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:26.219449   58820 retry.go:31] will retry after 2.767703647s: waiting for machine to come up
	I1205 00:40:28.990369   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:28.990792   58736 main.go:141] libmachine: (test-preload-792543) DBG | unable to find current IP address of domain test-preload-792543 in network mk-test-preload-792543
	I1205 00:40:28.990822   58736 main.go:141] libmachine: (test-preload-792543) DBG | I1205 00:40:28.990730   58820 retry.go:31] will retry after 4.391210678s: waiting for machine to come up
	I1205 00:40:33.386703   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.387180   58736 main.go:141] libmachine: (test-preload-792543) Found IP for machine: 192.168.39.177
	I1205 00:40:33.387209   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has current primary IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.387219   58736 main.go:141] libmachine: (test-preload-792543) Reserving static IP address...
	I1205 00:40:33.387649   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "test-preload-792543", mac: "52:54:00:a1:78:c8", ip: "192.168.39.177"} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.387666   58736 main.go:141] libmachine: (test-preload-792543) Reserved static IP address: 192.168.39.177
	I1205 00:40:33.387683   58736 main.go:141] libmachine: (test-preload-792543) DBG | skip adding static IP to network mk-test-preload-792543 - found existing host DHCP lease matching {name: "test-preload-792543", mac: "52:54:00:a1:78:c8", ip: "192.168.39.177"}
	I1205 00:40:33.387697   58736 main.go:141] libmachine: (test-preload-792543) DBG | Getting to WaitForSSH function...
	I1205 00:40:33.387712   58736 main.go:141] libmachine: (test-preload-792543) Waiting for SSH to be available...
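The retry lines above show the KVM driver polling libvirt for the guest's DHCP lease with a growing, jittered delay until the domain reports an address. A minimal sketch of that wait-and-retry pattern in Go, assuming a placeholder lookupIP helper (not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the hypervisor for the domain's lease.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Randomize the wait a little and grow it on each attempt,
            // mirroring the "will retry after ..." lines in the log.
            jitter := time.Duration(rand.Int63n(int64(delay)))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for machine to come up")
    }

    func main() {
        if ip, err := waitForIP(5 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }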
	I1205 00:40:33.389703   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.389969   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.389999   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.390120   58736 main.go:141] libmachine: (test-preload-792543) DBG | Using SSH client type: external
	I1205 00:40:33.390139   58736 main.go:141] libmachine: (test-preload-792543) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa (-rw-------)
	I1205 00:40:33.390171   58736 main.go:141] libmachine: (test-preload-792543) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 00:40:33.390179   58736 main.go:141] libmachine: (test-preload-792543) DBG | About to run SSH command:
	I1205 00:40:33.390190   58736 main.go:141] libmachine: (test-preload-792543) DBG | exit 0
	I1205 00:40:33.514474   58736 main.go:141] libmachine: (test-preload-792543) DBG | SSH cmd err, output: <nil>: 
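WaitForSSH shells out to the system ssh binary with the options logged above (key-only auth, no known-hosts checking, short connect timeout) and treats a successful `exit 0` as proof that sshd is up. A sketch of building that probe with os/exec; the address and key path are taken from the log, the rest is illustrative rather than the driver's exact call:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshExitZero(addr, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            "exit 0", // the probe command: success means sshd is reachable
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := sshExitZero("192.168.39.177",
            "/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa")
        fmt.Println("ssh reachable:", err == nil)
    }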
	I1205 00:40:33.514809   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetConfigRaw
	I1205 00:40:33.515444   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetIP
	I1205 00:40:33.518056   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.518362   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.518393   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.518586   58736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/config.json ...
	I1205 00:40:33.518785   58736 machine.go:93] provisionDockerMachine start ...
	I1205 00:40:33.518803   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:33.519027   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:33.521622   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.522009   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.522041   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.522253   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:33.522425   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.522575   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.522747   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:33.522903   58736 main.go:141] libmachine: Using SSH client type: native
	I1205 00:40:33.523154   58736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1205 00:40:33.523171   58736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:40:33.626072   58736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 00:40:33.626104   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetMachineName
	I1205 00:40:33.626351   58736 buildroot.go:166] provisioning hostname "test-preload-792543"
	I1205 00:40:33.626376   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetMachineName
	I1205 00:40:33.626538   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:33.629102   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.629390   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.629417   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.629537   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:33.629720   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.629876   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.630027   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:33.630195   58736 main.go:141] libmachine: Using SSH client type: native
	I1205 00:40:33.630412   58736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1205 00:40:33.630438   58736 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-792543 && echo "test-preload-792543" | sudo tee /etc/hostname
	I1205 00:40:33.748001   58736 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-792543
	
	I1205 00:40:33.748028   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:33.750959   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.751313   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.751344   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.751526   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:33.751679   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.751870   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:33.752011   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:33.752170   58736 main.go:141] libmachine: Using SSH client type: native
	I1205 00:40:33.752335   58736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1205 00:40:33.752351   58736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-792543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-792543/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-792543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:40:33.862297   58736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:40:33.862334   58736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:40:33.862366   58736 buildroot.go:174] setting up certificates
	I1205 00:40:33.862376   58736 provision.go:84] configureAuth start
	I1205 00:40:33.862385   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetMachineName
	I1205 00:40:33.862653   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetIP
	I1205 00:40:33.865224   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.865546   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.865578   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.865772   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:33.867982   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.868284   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:33.868300   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:33.868400   58736 provision.go:143] copyHostCerts
	I1205 00:40:33.868453   58736 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:40:33.868462   58736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:40:33.868528   58736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:40:33.868624   58736 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:40:33.868631   58736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:40:33.868657   58736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:40:33.868735   58736 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:40:33.868746   58736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:40:33.868781   58736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:40:33.868864   58736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.test-preload-792543 san=[127.0.0.1 192.168.39.177 localhost minikube test-preload-792543]
	I1205 00:40:34.040771   58736 provision.go:177] copyRemoteCerts
	I1205 00:40:34.040829   58736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:40:34.040853   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.043740   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.044213   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.044245   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.044495   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.044687   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.044879   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.045074   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:34.128110   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:40:34.151419   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 00:40:34.173407   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 00:40:34.195350   58736 provision.go:87] duration metric: took 332.961749ms to configureAuth
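configureAuth copies the CA material into place and issues a server certificate whose SANs are the loopback address, the VM's IP, and the machine's host names, as the provision.go line above lists. A hedged sketch of producing a certificate with that SAN set using crypto/x509; for brevity it self-signs, whereas the real flow signs with the minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative only: self-signed cert carrying the SANs from the log.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-792543"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "test-preload-792543"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.177")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }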
	I1205 00:40:34.195378   58736 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:40:34.195534   58736 config.go:182] Loaded profile config "test-preload-792543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 00:40:34.195597   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.198075   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.198424   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.198453   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.198660   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.198831   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.198967   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.199074   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.199210   58736 main.go:141] libmachine: Using SSH client type: native
	I1205 00:40:34.199379   58736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1205 00:40:34.199406   58736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:40:34.421450   58736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:40:34.421481   58736 machine.go:96] duration metric: took 902.682787ms to provisionDockerMachine
	I1205 00:40:34.421500   58736 start.go:293] postStartSetup for "test-preload-792543" (driver="kvm2")
	I1205 00:40:34.421514   58736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:40:34.421537   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:34.421875   58736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:40:34.421914   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.424787   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.425122   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.425151   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.425316   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.425515   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.425697   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.425824   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:34.508290   58736 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:40:34.512524   58736 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:40:34.512558   58736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:40:34.512632   58736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:40:34.512720   58736 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:40:34.512831   58736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:40:34.521951   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:40:34.545905   58736 start.go:296] duration metric: took 124.388253ms for postStartSetup
	I1205 00:40:34.545982   58736 fix.go:56] duration metric: took 19.953796614s for fixHost
	I1205 00:40:34.546016   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.548812   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.549196   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.549228   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.549365   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.549564   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.549688   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.549793   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.550059   58736 main.go:141] libmachine: Using SSH client type: native
	I1205 00:40:34.550219   58736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1205 00:40:34.550229   58736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:40:34.654505   58736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733359234.623580983
	
	I1205 00:40:34.654535   58736 fix.go:216] guest clock: 1733359234.623580983
	I1205 00:40:34.654543   58736 fix.go:229] Guest: 2024-12-05 00:40:34.623580983 +0000 UTC Remote: 2024-12-05 00:40:34.545997648 +0000 UTC m=+32.545341607 (delta=77.583335ms)
	I1205 00:40:34.654573   58736 fix.go:200] guest clock delta is within tolerance: 77.583335ms
	I1205 00:40:34.654583   58736 start.go:83] releasing machines lock for "test-preload-792543", held for 20.062407707s
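The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the skew when it is inside a tolerance (about 77ms here). A small sketch of that comparison; the one-second tolerance below is an assumed value for illustration:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDeltaOK parses the guest's "seconds.nanoseconds" timestamp and
    // reports whether its offset from the host clock is within tolerance.
    func clockDeltaOK(guest string, tolerance time.Duration) (time.Duration, bool) {
        secs, err := strconv.ParseFloat(guest, 64)
        if err != nil {
            return 0, false
        }
        guestTime := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guestTime)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        d, ok := clockDeltaOK("1733359234.623580983", time.Second)
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, ok)
    }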
	I1205 00:40:34.654608   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:34.654946   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetIP
	I1205 00:40:34.657606   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.657973   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.658005   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.658183   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:34.658689   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:34.658866   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:34.658972   58736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:40:34.659014   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.659049   58736 ssh_runner.go:195] Run: cat /version.json
	I1205 00:40:34.659074   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:34.661867   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.662115   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.662182   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.662211   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.662315   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.662515   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:34.662525   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.662541   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:34.662671   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.662722   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:34.662804   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:34.662868   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:34.662998   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:34.663157   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:34.763502   58736 ssh_runner.go:195] Run: systemctl --version
	I1205 00:40:34.769519   58736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:40:34.914052   58736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 00:40:34.919669   58736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:40:34.919729   58736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:40:34.935480   58736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 00:40:34.935504   58736 start.go:495] detecting cgroup driver to use...
	I1205 00:40:34.935556   58736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:40:34.951629   58736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:40:34.965683   58736 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:40:34.965739   58736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:40:34.978938   58736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:40:34.991911   58736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:40:35.100624   58736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:40:35.254865   58736 docker.go:233] disabling docker service ...
	I1205 00:40:35.254927   58736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:40:35.269019   58736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:40:35.281602   58736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:40:35.404944   58736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:40:35.528879   58736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:40:35.542341   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:40:35.559785   58736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1205 00:40:35.559847   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.570128   58736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:40:35.570191   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.580053   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.590181   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.599827   58736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:40:35.609922   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.619616   58736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.635710   58736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:40:35.645494   58736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:40:35.654250   58736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 00:40:35.654323   58736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 00:40:35.668250   58736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:40:35.680809   58736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:40:35.804760   58736 ssh_runner.go:195] Run: sudo systemctl restart crio
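The sed invocations above pin the pause image to registry.k8s.io/pause:3.7, switch CRI-O to the cgroupfs cgroup manager, put conmon into the "pod" cgroup, and open unprivileged ports in /etc/crio/crio.conf.d/02-crio.conf before restarting the service. A sketch of two of those rewrites (pause image and cgroup settings) done with Go regexps instead of sed; illustrative only, since the real edits run remotely over SSH:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf mirrors the sed edits from the log: pin the pause image,
    // force the cgroupfs cgroup manager, and put conmon in the "pod" cgroup.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }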
	I1205 00:40:35.888248   58736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:40:35.888341   58736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:40:35.892705   58736 start.go:563] Will wait 60s for crictl version
	I1205 00:40:35.892762   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:35.896023   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:40:35.933458   58736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 00:40:35.933534   58736 ssh_runner.go:195] Run: crio --version
	I1205 00:40:35.960252   58736 ssh_runner.go:195] Run: crio --version
	I1205 00:40:35.990047   58736 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1205 00:40:35.991531   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetIP
	I1205 00:40:35.994194   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:35.994534   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:35.994564   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:35.994738   58736 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:40:35.998652   58736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 00:40:36.010034   58736 kubeadm.go:883] updating cluster {Name:test-preload-792543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-792543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:40:36.010143   58736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 00:40:36.010182   58736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:40:36.043984   58736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 00:40:36.044047   58736 ssh_runner.go:195] Run: which lz4
	I1205 00:40:36.047980   58736 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 00:40:36.051754   58736 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 00:40:36.051782   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1205 00:40:37.386238   58736 crio.go:462] duration metric: took 1.338288389s to copy over tarball
	I1205 00:40:37.386317   58736 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 00:40:39.695357   58736 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.309004621s)
	I1205 00:40:39.695387   58736 crio.go:469] duration metric: took 2.30912147s to extract the tarball
	I1205 00:40:39.695395   58736 ssh_runner.go:146] rm: /preloaded.tar.lz4
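The preload tarball is copied to the guest and unpacked with the exact tar invocation shown above: lz4-compressed, extended attributes preserved, extracted under /var, then deleted. A sketch of driving that extraction from Go with os/exec; the path is the one from the log, and the sudo/SSH wrapping is omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload(tarball string) error {
        // Mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf <tarball>
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }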
	I1205 00:40:39.736853   58736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:40:39.777369   58736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 00:40:39.777397   58736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 00:40:39.777471   58736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:39.777471   58736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:40:39.777483   58736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:39.777561   58736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:39.777571   58736 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 00:40:39.777611   58736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:39.777567   58736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:39.777545   58736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:39.778862   58736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:39.779016   58736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:39.779017   58736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:39.779075   58736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:39.779105   58736 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 00:40:39.779076   58736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:39.779018   58736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:39.779018   58736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:40:39.959524   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:39.998489   58736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1205 00:40:39.998532   58736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:39.998570   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.000047   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:40.002842   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:40.009358   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:40.011598   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:40.017587   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:40.023693   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:40.036494   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 00:40:40.105052   58736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1205 00:40:40.105101   58736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:40.105152   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.105162   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:40.152681   58736 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1205 00:40:40.152723   58736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:40.152769   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.184888   58736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1205 00:40:40.184939   58736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:40.184989   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.184996   58736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1205 00:40:40.185028   58736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:40.185039   58736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1205 00:40:40.185071   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.185069   58736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:40.185183   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.198623   58736 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1205 00:40:40.198665   58736 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1205 00:40:40.198708   58736 ssh_runner.go:195] Run: which crictl
	I1205 00:40:40.198728   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 00:40:40.198808   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:40.198842   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:40.198878   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:40.198910   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:40.199273   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:40.228237   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 00:40:40.292880   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1205 00:40:40.292992   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 00:40:40.332391   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:40.333152   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:40.333248   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:40.337802   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:40.344816   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:40.370110   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1205 00:40:40.370136   58736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 00:40:40.370178   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 00:40:40.370122   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 00:40:40.426432   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 00:40:40.426559   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 00:40:40.492676   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 00:40:40.507485   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 00:40:40.514802   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 00:40:40.975632   58736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:40:43.477302   58736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.107086008s)
	I1205 00:40:43.477338   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1205 00:40:43.477396   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.107146418s)
	I1205 00:40:43.477440   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.050973634s)
	I1205 00:40:43.477475   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1205 00:40:43.477476   58736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 00:40:43.477487   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (3.050905584s)
	I1205 00:40:43.477520   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1205 00:40:43.477557   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 00:40:43.477562   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.9848572s)
	I1205 00:40:43.477599   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 00:40:43.477599   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1205 00:40:43.477682   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.962857503s)
	I1205 00:40:43.477698   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1205 00:40:43.477709   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 00:40:43.477640   58736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.970129698s)
	I1205 00:40:43.477769   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 00:40:43.477776   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1205 00:40:43.477852   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 00:40:43.518077   58736 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.542406802s)
	I1205 00:40:43.518138   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1205 00:40:43.518146   58736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 00:40:43.518142   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1205 00:40:43.518164   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1205 00:40:43.518173   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1205 00:40:43.518189   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1205 00:40:43.518195   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 00:40:43.518212   58736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1205 00:40:43.518307   58736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 00:40:44.257151   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1205 00:40:44.257204   58736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 00:40:44.257222   58736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1205 00:40:44.257255   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 00:40:45.106418   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1205 00:40:45.106471   58736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1205 00:40:45.106513   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1205 00:40:47.251962   58736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.14542768s)
	I1205 00:40:47.252006   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1205 00:40:47.252036   58736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 00:40:47.252079   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1205 00:40:47.590363   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 00:40:47.590419   58736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 00:40:47.590491   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 00:40:48.240274   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1205 00:40:48.240311   58736 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 00:40:48.240375   58736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1205 00:40:48.382395   58736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1205 00:40:48.382442   58736 cache_images.go:123] Successfully loaded all cached images
	I1205 00:40:48.382450   58736 cache_images.go:92] duration metric: took 8.605038536s to LoadCachedImages
	I1205 00:40:48.382465   58736 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.24.4 crio true true} ...
	I1205 00:40:48.382565   58736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-792543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-792543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
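The two blocks above are what minikube renders for the node: a kubelet systemd drop-in (ExecStart pointing at the versioned kubelet binary, the CRI-O socket, the hostname override and node IP) followed by the profile's cluster config. As a rough illustration only, the sketch below renders a drop-in of the same shape with Go's text/template; the struct fields and template text are assumptions made for this example, not minikube's actual generator.

// kubelet_dropin_sketch.go - illustrative only; the template text and parameter
// names are assumptions for this sketch, not minikube's real code.
package main

import (
	"log"
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the log above for this test-preload profile.
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.24.4", "test-preload-792543", "192.168.39.177"}

	tmpl := template.Must(template.New("dropin").Parse(dropin))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}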
	I1205 00:40:48.382649   58736 ssh_runner.go:195] Run: crio config
	I1205 00:40:48.425980   58736 cni.go:84] Creating CNI manager for ""
	I1205 00:40:48.426006   58736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:40:48.426018   58736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:40:48.426041   58736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-792543 NodeName:test-preload-792543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:40:48.426186   58736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-792543"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
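The generated kubeadm config above is a single file holding four API documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal sketch for inspecting such a multi-document file follows; it assumes gopkg.in/yaml.v3 and a local copy of the config (the file name is a placeholder), and it is not part of minikube.

// kubeadm_yaml_sketch.go - lists the API documents inside a kubeadm.yaml like the
// one above; gopkg.in/yaml.v3 and the local file name are assumptions for this sketch.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // placeholder for a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}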
	
	I1205 00:40:48.426242   58736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1205 00:40:48.435384   58736 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:40:48.435450   58736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:40:48.444090   58736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1205 00:40:48.459897   58736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:40:48.474942   58736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1205 00:40:48.491108   58736 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I1205 00:40:48.494607   58736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 00:40:48.508277   58736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:40:48.615149   58736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:40:48.630419   58736 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543 for IP: 192.168.39.177
	I1205 00:40:48.630441   58736 certs.go:194] generating shared ca certs ...
	I1205 00:40:48.630455   58736 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:40:48.630600   58736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:40:48.630636   58736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:40:48.630642   58736 certs.go:256] generating profile certs ...
	I1205 00:40:48.630715   58736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/client.key
	I1205 00:40:48.630773   58736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/apiserver.key.e96694ff
	I1205 00:40:48.630806   58736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/proxy-client.key
	I1205 00:40:48.630911   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:40:48.630940   58736 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:40:48.630950   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:40:48.630975   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:40:48.630997   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:40:48.631019   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:40:48.631068   58736 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:40:48.631666   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:40:48.676738   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:40:48.709373   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:40:48.741281   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:40:48.776002   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 00:40:48.811886   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:40:48.852767   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:40:48.875640   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 00:40:48.897917   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:40:48.919729   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:40:48.941626   58736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:40:48.964151   58736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:40:48.980023   58736 ssh_runner.go:195] Run: openssl version
	I1205 00:40:48.985702   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:40:48.996343   58736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:40:49.000656   58736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:40:49.000719   58736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:40:49.006624   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 00:40:49.017701   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:40:49.028443   58736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:40:49.032846   58736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:40:49.032920   58736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:40:49.038459   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:40:49.049212   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:40:49.060026   58736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:40:49.064225   58736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:40:49.064287   58736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:40:49.069742   58736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:40:49.080491   58736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:40:49.084876   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:40:49.090848   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:40:49.096709   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:40:49.102762   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:40:49.108742   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:40:49.114673   58736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
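Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now, presumably so the cert can be regenerated before it expires. A minimal standalone sketch of the same check with Go's crypto/x509 follows; the file path is a placeholder, and this is an illustration rather than minikube's code.

// cert_expiry_sketch.go - replicates the effect of `openssl x509 -checkend 86400`
// for a single PEM certificate; the path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the certificate is no longer valid 86400 seconds from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}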
	I1205 00:40:49.120180   58736 kubeadm.go:392] StartCluster: {Name:test-preload-792543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-792543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:40:49.120300   58736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:40:49.120364   58736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:40:49.157672   58736 cri.go:89] found id: ""
	I1205 00:40:49.157749   58736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 00:40:49.168131   58736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 00:40:49.168155   58736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 00:40:49.168207   58736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 00:40:49.177977   58736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 00:40:49.178524   58736 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-792543" does not appear in /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:40:49.178651   58736 kubeconfig.go:62] /home/jenkins/minikube-integration/20045-14126/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-792543" cluster setting kubeconfig missing "test-preload-792543" context setting]
	I1205 00:40:49.178948   58736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:40:49.179601   58736 kapi.go:59] client config for test-preload-792543: &rest.Config{Host:"https://192.168.39.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 00:40:49.180217   58736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 00:40:49.189671   58736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.177
	I1205 00:40:49.189701   58736 kubeadm.go:1160] stopping kube-system containers ...
	I1205 00:40:49.189711   58736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 00:40:49.189768   58736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:40:49.224223   58736 cri.go:89] found id: ""
	I1205 00:40:49.224310   58736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 00:40:49.241106   58736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 00:40:49.250780   58736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 00:40:49.250802   58736 kubeadm.go:157] found existing configuration files:
	
	I1205 00:40:49.250844   58736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 00:40:49.261218   58736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 00:40:49.261277   58736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 00:40:49.271989   58736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 00:40:49.282192   58736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 00:40:49.282260   58736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 00:40:49.292580   58736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 00:40:49.301179   58736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 00:40:49.301253   58736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 00:40:49.310590   58736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 00:40:49.319697   58736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 00:40:49.319767   58736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 00:40:49.329104   58736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 00:40:49.338744   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:49.435181   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:50.010233   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:50.279310   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:50.353670   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:50.478259   58736 api_server.go:52] waiting for apiserver process to appear ...
	I1205 00:40:50.478383   58736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:40:50.978411   58736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:40:51.479252   58736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:40:51.496370   58736 api_server.go:72] duration metric: took 1.018111081s to wait for apiserver process to appear ...
	I1205 00:40:51.496399   58736 api_server.go:88] waiting for apiserver healthz status ...
	I1205 00:40:51.496418   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:51.496831   58736 api_server.go:269] stopped: https://192.168.39.177:8443/healthz: Get "https://192.168.39.177:8443/healthz": dial tcp 192.168.39.177:8443: connect: connection refused
	I1205 00:40:51.997497   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:55.543323   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 00:40:55.543375   58736 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 00:40:55.543389   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:55.595377   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 00:40:55.595415   58736 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 00:40:55.996857   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:56.002095   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 00:40:56.002122   58736 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 00:40:56.496694   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:56.501349   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 00:40:56.501381   58736 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 00:40:56.996905   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:40:57.002092   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I1205 00:40:57.008162   58736 api_server.go:141] control plane version: v1.24.4
	I1205 00:40:57.008188   58736 api_server.go:131] duration metric: took 5.51178266s to wait for apiserver health ...
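The healthz wait above polls https://192.168.39.177:8443/healthz roughly every half second (per the timestamps): the first probe is refused while the apiserver is still coming up, the next return 403 because the anonymous request predates the RBAC bootstrap roles, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal sketch of such a polling loop is below; the timings and the InsecureSkipVerify shortcut are assumptions made for the example, not minikube's exact behaviour.

// healthz_poll_sketch.go - polls an apiserver /healthz endpoint until it returns 200,
// tolerating the 403/500 responses seen in the log above; illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert here; verification is skipped for brevity.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(90 * time.Second) // illustrative deadline
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.177:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}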
	I1205 00:40:57.008197   58736 cni.go:84] Creating CNI manager for ""
	I1205 00:40:57.008203   58736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:40:57.010108   58736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 00:40:57.011539   58736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 00:40:57.028904   58736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 00:40:57.060230   58736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 00:40:57.060365   58736 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 00:40:57.060390   58736 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 00:40:57.069195   58736 system_pods.go:59] 7 kube-system pods found
	I1205 00:40:57.069227   58736 system_pods.go:61] "coredns-6d4b75cb6d-ddbf9" [7e1fdd0d-1c2f-437f-a567-7da9a53e391e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 00:40:57.069233   58736 system_pods.go:61] "etcd-test-preload-792543" [8d8c48c3-5a91-4164-9acd-20a81c99f1de] Running
	I1205 00:40:57.069239   58736 system_pods.go:61] "kube-apiserver-test-preload-792543" [d2d654c8-5f66-43d1-a59e-1202c58dc1a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 00:40:57.069247   58736 system_pods.go:61] "kube-controller-manager-test-preload-792543" [9a786fb7-d21d-4a61-83fe-0797eaae0211] Running
	I1205 00:40:57.069254   58736 system_pods.go:61] "kube-proxy-zrk58" [336db2a7-3df1-430f-97ca-9fada4b5efa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 00:40:57.069257   58736 system_pods.go:61] "kube-scheduler-test-preload-792543" [d860a6f4-3a91-4cad-8d70-2aa4d47dc133] Running
	I1205 00:40:57.069262   58736 system_pods.go:61] "storage-provisioner" [e6777cfb-a832-47bd-8706-4ac341ba8b79] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 00:40:57.069269   58736 system_pods.go:74] duration metric: took 9.009701ms to wait for pod list to return data ...
	I1205 00:40:57.069278   58736 node_conditions.go:102] verifying NodePressure condition ...
	I1205 00:40:57.074726   58736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 00:40:57.074749   58736 node_conditions.go:123] node cpu capacity is 2
	I1205 00:40:57.074758   58736 node_conditions.go:105] duration metric: took 5.476361ms to run NodePressure ...
	I1205 00:40:57.074778   58736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 00:40:57.287834   58736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 00:40:57.292317   58736 kubeadm.go:739] kubelet initialised
	I1205 00:40:57.292336   58736 kubeadm.go:740] duration metric: took 4.477275ms waiting for restarted kubelet to initialise ...
	I1205 00:40:57.292343   58736 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 00:40:57.297275   58736 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:57.303495   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.303522   58736 pod_ready.go:82] duration metric: took 6.221126ms for pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:57.303533   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.303542   58736 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:57.309551   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "etcd-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.309579   58736 pod_ready.go:82] duration metric: took 6.027573ms for pod "etcd-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:57.309591   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "etcd-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.309600   58736 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:57.315429   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "kube-apiserver-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.315454   58736 pod_ready.go:82] duration metric: took 5.839318ms for pod "kube-apiserver-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:57.315462   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "kube-apiserver-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.315471   58736 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:57.464911   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.464947   58736 pod_ready.go:82] duration metric: took 149.462758ms for pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:57.464962   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.464971   58736 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zrk58" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:57.864405   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "kube-proxy-zrk58" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.864430   58736 pod_ready.go:82] duration metric: took 399.449294ms for pod "kube-proxy-zrk58" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:57.864440   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "kube-proxy-zrk58" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:57.864446   58736 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:40:58.264561   58736 pod_ready.go:98] node "test-preload-792543" hosting pod "kube-scheduler-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:58.264585   58736 pod_ready.go:82] duration metric: took 400.133989ms for pod "kube-scheduler-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	E1205 00:40:58.264594   58736 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-792543" hosting pod "kube-scheduler-test-preload-792543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-792543" has status "Ready":"False"
	I1205 00:40:58.264607   58736 pod_ready.go:39] duration metric: took 972.249957ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
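The pod_ready waits above are all short-circuited at this point because the node itself still reports Ready=False; once the node becomes Ready (later in the log), each pod's own Ready condition is polled instead. A minimal client-go sketch of that per-pod check follows; the kubeconfig path and pod name are placeholders, and this is an illustration rather than minikube's implementation.

// pod_ready_sketch.go - checks whether a kube-system pod reports the Ready condition,
// the same signal the pod_ready waits above look for; paths and names are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-6d4b75cb6d-ddbf9", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}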
	I1205 00:40:58.264622   58736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 00:40:58.275794   58736 ops.go:34] apiserver oom_adj: -16
	I1205 00:40:58.275821   58736 kubeadm.go:597] duration metric: took 9.107658697s to restartPrimaryControlPlane
	I1205 00:40:58.275833   58736 kubeadm.go:394] duration metric: took 9.155658019s to StartCluster
	I1205 00:40:58.275851   58736 settings.go:142] acquiring lock: {Name:mkdafc0119272d33e86aa3cd16389b126b5518c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:40:58.275930   58736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:40:58.276593   58736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/kubeconfig: {Name:mkdcd2310e2ff57e3c85e12df202da266115cf1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:40:58.276812   58736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 00:40:58.276866   58736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 00:40:58.276952   58736 addons.go:69] Setting storage-provisioner=true in profile "test-preload-792543"
	I1205 00:40:58.276976   58736 addons.go:234] Setting addon storage-provisioner=true in "test-preload-792543"
	I1205 00:40:58.276975   58736 addons.go:69] Setting default-storageclass=true in profile "test-preload-792543"
	W1205 00:40:58.276986   58736 addons.go:243] addon storage-provisioner should already be in state true
	I1205 00:40:58.276997   58736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-792543"
	I1205 00:40:58.277020   58736 host.go:66] Checking if "test-preload-792543" exists ...
	I1205 00:40:58.277085   58736 config.go:182] Loaded profile config "test-preload-792543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 00:40:58.277340   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:58.277379   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:58.277399   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:58.277436   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:58.278720   58736 out.go:177] * Verifying Kubernetes components...
	I1205 00:40:58.280106   58736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:40:58.292546   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1205 00:40:58.293036   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:58.293499   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:58.293518   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:58.294034   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:58.294510   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:58.294544   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:58.296850   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I1205 00:40:58.297303   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:58.297724   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:58.297738   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:58.298062   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:58.298251   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetState
	I1205 00:40:58.300691   58736 kapi.go:59] client config for test-preload-792543: &rest.Config{Host:"https://192.168.39.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/client.crt", KeyFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/profiles/test-preload-792543/client.key", CAFile:"/home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 00:40:58.301023   58736 addons.go:234] Setting addon default-storageclass=true in "test-preload-792543"
	W1205 00:40:58.301042   58736 addons.go:243] addon default-storageclass should already be in state true
	I1205 00:40:58.301069   58736 host.go:66] Checking if "test-preload-792543" exists ...
	I1205 00:40:58.301450   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:58.301489   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:58.310006   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I1205 00:40:58.310390   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:58.310850   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:58.310875   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:58.311190   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:58.311369   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetState
	I1205 00:40:58.312864   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:58.315110   58736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:40:58.315723   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I1205 00:40:58.316121   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:58.316551   58736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 00:40:58.316570   58736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 00:40:58.316587   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:58.316605   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:58.316625   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:58.316961   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:58.317567   58736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:40:58.317612   58736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:40:58.319698   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:58.320122   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:58.320172   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:58.320284   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:58.320441   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:58.320576   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:58.320672   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:58.353124   58736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I1205 00:40:58.353561   58736 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:40:58.354049   58736 main.go:141] libmachine: Using API Version  1
	I1205 00:40:58.354070   58736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:40:58.354698   58736 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:40:58.354905   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetState
	I1205 00:40:58.356547   58736 main.go:141] libmachine: (test-preload-792543) Calling .DriverName
	I1205 00:40:58.356798   58736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 00:40:58.356829   58736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 00:40:58.356849   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHHostname
	I1205 00:40:58.359628   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:58.360033   58736 main.go:141] libmachine: (test-preload-792543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:78:c8", ip: ""} in network mk-test-preload-792543: {Iface:virbr1 ExpiryTime:2024-12-05 01:40:25 +0000 UTC Type:0 Mac:52:54:00:a1:78:c8 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-792543 Clientid:01:52:54:00:a1:78:c8}
	I1205 00:40:58.360072   58736 main.go:141] libmachine: (test-preload-792543) DBG | domain test-preload-792543 has defined IP address 192.168.39.177 and MAC address 52:54:00:a1:78:c8 in network mk-test-preload-792543
	I1205 00:40:58.360148   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHPort
	I1205 00:40:58.360302   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHKeyPath
	I1205 00:40:58.360448   58736 main.go:141] libmachine: (test-preload-792543) Calling .GetSSHUsername
	I1205 00:40:58.360561   58736 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/test-preload-792543/id_rsa Username:docker}
	I1205 00:40:58.461433   58736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:40:58.478641   58736 node_ready.go:35] waiting up to 6m0s for node "test-preload-792543" to be "Ready" ...
	I1205 00:40:58.535389   58736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 00:40:58.560817   58736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 00:40:59.503653   58736 main.go:141] libmachine: Making call to close driver server
	I1205 00:40:59.503679   58736 main.go:141] libmachine: (test-preload-792543) Calling .Close
	I1205 00:40:59.503935   58736 main.go:141] libmachine: Successfully made call to close driver server
	I1205 00:40:59.503955   58736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 00:40:59.503957   58736 main.go:141] libmachine: (test-preload-792543) DBG | Closing plugin on server side
	I1205 00:40:59.503964   58736 main.go:141] libmachine: Making call to close driver server
	I1205 00:40:59.503972   58736 main.go:141] libmachine: (test-preload-792543) Calling .Close
	I1205 00:40:59.504220   58736 main.go:141] libmachine: Successfully made call to close driver server
	I1205 00:40:59.504263   58736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 00:40:59.506146   58736 main.go:141] libmachine: Making call to close driver server
	I1205 00:40:59.506166   58736 main.go:141] libmachine: (test-preload-792543) Calling .Close
	I1205 00:40:59.506451   58736 main.go:141] libmachine: Successfully made call to close driver server
	I1205 00:40:59.506465   58736 main.go:141] libmachine: (test-preload-792543) DBG | Closing plugin on server side
	I1205 00:40:59.506468   58736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 00:40:59.506485   58736 main.go:141] libmachine: Making call to close driver server
	I1205 00:40:59.506496   58736 main.go:141] libmachine: (test-preload-792543) Calling .Close
	I1205 00:40:59.506730   58736 main.go:141] libmachine: Successfully made call to close driver server
	I1205 00:40:59.506750   58736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 00:40:59.506775   58736 main.go:141] libmachine: (test-preload-792543) DBG | Closing plugin on server side
	I1205 00:40:59.512381   58736 main.go:141] libmachine: Making call to close driver server
	I1205 00:40:59.512403   58736 main.go:141] libmachine: (test-preload-792543) Calling .Close
	I1205 00:40:59.512665   58736 main.go:141] libmachine: (test-preload-792543) DBG | Closing plugin on server side
	I1205 00:40:59.512727   58736 main.go:141] libmachine: Successfully made call to close driver server
	I1205 00:40:59.512743   58736 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 00:40:59.515873   58736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 00:40:59.517035   58736 addons.go:510] duration metric: took 1.2401821s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 00:41:00.482649   58736 node_ready.go:53] node "test-preload-792543" has status "Ready":"False"
	I1205 00:41:02.982861   58736 node_ready.go:53] node "test-preload-792543" has status "Ready":"False"
	I1205 00:41:05.482053   58736 node_ready.go:53] node "test-preload-792543" has status "Ready":"False"
	I1205 00:41:06.482313   58736 node_ready.go:49] node "test-preload-792543" has status "Ready":"True"
	I1205 00:41:06.482340   58736 node_ready.go:38] duration metric: took 8.003667232s for node "test-preload-792543" to be "Ready" ...
	I1205 00:41:06.482349   58736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 00:41:06.487280   58736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:06.491539   58736 pod_ready.go:93] pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:06.491562   58736 pod_ready.go:82] duration metric: took 4.260815ms for pod "coredns-6d4b75cb6d-ddbf9" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:06.491575   58736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:06.496197   58736 pod_ready.go:93] pod "etcd-test-preload-792543" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:06.496215   58736 pod_ready.go:82] duration metric: took 4.633035ms for pod "etcd-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:06.496232   58736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.503976   58736 pod_ready.go:93] pod "kube-apiserver-test-preload-792543" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:07.504001   58736 pod_ready.go:82] duration metric: took 1.007760676s for pod "kube-apiserver-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.504010   58736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.508934   58736 pod_ready.go:93] pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:07.508957   58736 pod_ready.go:82] duration metric: took 4.941108ms for pod "kube-controller-manager-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.508965   58736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrk58" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.682340   58736 pod_ready.go:93] pod "kube-proxy-zrk58" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:07.682366   58736 pod_ready.go:82] duration metric: took 173.391818ms for pod "kube-proxy-zrk58" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:07.682376   58736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:08.082569   58736 pod_ready.go:93] pod "kube-scheduler-test-preload-792543" in "kube-system" namespace has status "Ready":"True"
	I1205 00:41:08.082590   58736 pod_ready.go:82] duration metric: took 400.208011ms for pod "kube-scheduler-test-preload-792543" in "kube-system" namespace to be "Ready" ...
	I1205 00:41:08.082600   58736 pod_ready.go:39] duration metric: took 1.600237505s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 00:41:08.082613   58736 api_server.go:52] waiting for apiserver process to appear ...
	I1205 00:41:08.082672   58736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:41:08.099236   58736 api_server.go:72] duration metric: took 9.822396859s to wait for apiserver process to appear ...
	I1205 00:41:08.099261   58736 api_server.go:88] waiting for apiserver healthz status ...
	I1205 00:41:08.099285   58736 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1205 00:41:08.106484   58736 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I1205 00:41:08.107411   58736 api_server.go:141] control plane version: v1.24.4
	I1205 00:41:08.107431   58736 api_server.go:131] duration metric: took 8.163252ms to wait for apiserver health ...
	I1205 00:41:08.107438   58736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 00:41:08.294033   58736 system_pods.go:59] 7 kube-system pods found
	I1205 00:41:08.294070   58736 system_pods.go:61] "coredns-6d4b75cb6d-ddbf9" [7e1fdd0d-1c2f-437f-a567-7da9a53e391e] Running
	I1205 00:41:08.294079   58736 system_pods.go:61] "etcd-test-preload-792543" [8d8c48c3-5a91-4164-9acd-20a81c99f1de] Running
	I1205 00:41:08.294084   58736 system_pods.go:61] "kube-apiserver-test-preload-792543" [d2d654c8-5f66-43d1-a59e-1202c58dc1a8] Running
	I1205 00:41:08.294089   58736 system_pods.go:61] "kube-controller-manager-test-preload-792543" [9a786fb7-d21d-4a61-83fe-0797eaae0211] Running
	I1205 00:41:08.294095   58736 system_pods.go:61] "kube-proxy-zrk58" [336db2a7-3df1-430f-97ca-9fada4b5efa3] Running
	I1205 00:41:08.294100   58736 system_pods.go:61] "kube-scheduler-test-preload-792543" [d860a6f4-3a91-4cad-8d70-2aa4d47dc133] Running
	I1205 00:41:08.294105   58736 system_pods.go:61] "storage-provisioner" [e6777cfb-a832-47bd-8706-4ac341ba8b79] Running
	I1205 00:41:08.294114   58736 system_pods.go:74] duration metric: took 186.668549ms to wait for pod list to return data ...
	I1205 00:41:08.294129   58736 default_sa.go:34] waiting for default service account to be created ...
	I1205 00:41:08.482670   58736 default_sa.go:45] found service account: "default"
	I1205 00:41:08.482693   58736 default_sa.go:55] duration metric: took 188.557674ms for default service account to be created ...
	I1205 00:41:08.482702   58736 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 00:41:08.684923   58736 system_pods.go:86] 7 kube-system pods found
	I1205 00:41:08.684949   58736 system_pods.go:89] "coredns-6d4b75cb6d-ddbf9" [7e1fdd0d-1c2f-437f-a567-7da9a53e391e] Running
	I1205 00:41:08.684955   58736 system_pods.go:89] "etcd-test-preload-792543" [8d8c48c3-5a91-4164-9acd-20a81c99f1de] Running
	I1205 00:41:08.684959   58736 system_pods.go:89] "kube-apiserver-test-preload-792543" [d2d654c8-5f66-43d1-a59e-1202c58dc1a8] Running
	I1205 00:41:08.684963   58736 system_pods.go:89] "kube-controller-manager-test-preload-792543" [9a786fb7-d21d-4a61-83fe-0797eaae0211] Running
	I1205 00:41:08.684966   58736 system_pods.go:89] "kube-proxy-zrk58" [336db2a7-3df1-430f-97ca-9fada4b5efa3] Running
	I1205 00:41:08.684969   58736 system_pods.go:89] "kube-scheduler-test-preload-792543" [d860a6f4-3a91-4cad-8d70-2aa4d47dc133] Running
	I1205 00:41:08.684972   58736 system_pods.go:89] "storage-provisioner" [e6777cfb-a832-47bd-8706-4ac341ba8b79] Running
	I1205 00:41:08.684977   58736 system_pods.go:126] duration metric: took 202.271429ms to wait for k8s-apps to be running ...
	I1205 00:41:08.684983   58736 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 00:41:08.685022   58736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 00:41:08.699239   58736 system_svc.go:56] duration metric: took 14.249419ms WaitForService to wait for kubelet
	I1205 00:41:08.699264   58736 kubeadm.go:582] duration metric: took 10.422428908s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:41:08.699282   58736 node_conditions.go:102] verifying NodePressure condition ...
	I1205 00:41:08.883123   58736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 00:41:08.883147   58736 node_conditions.go:123] node cpu capacity is 2
	I1205 00:41:08.883157   58736 node_conditions.go:105] duration metric: took 183.871273ms to run NodePressure ...
	I1205 00:41:08.883168   58736 start.go:241] waiting for startup goroutines ...
	I1205 00:41:08.883174   58736 start.go:246] waiting for cluster config update ...
	I1205 00:41:08.883184   58736 start.go:255] writing updated cluster config ...
	I1205 00:41:08.883433   58736 ssh_runner.go:195] Run: rm -f paused
	I1205 00:41:08.929297   58736 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1205 00:41:08.930925   58736 out.go:201] 
	W1205 00:41:08.932090   58736 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1205 00:41:08.933241   58736 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1205 00:41:08.934578   58736 out.go:177] * Done! kubectl is now configured to use "test-preload-792543" cluster and "default" namespace by default
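
The wait sequence recorded above boils down to two checks against the restored cluster: poll the node "test-preload-792543" until its Ready condition turns True (the log shows this taking roughly 8s after the kubelet restart), then confirm the apiserver answers /healthz with "ok". The following is a minimal, hypothetical sketch of those two checks using client-go; it is not minikube's own implementation, and the kubeconfig path is a placeholder.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path: point this at the profile's kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()

    	// 1) Poll the node's Ready condition until it reports True.
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, "test-preload-792543", metav1.GetOptions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		ready := false
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if ready {
    			break
    		}
    		time.Sleep(2 * time.Second)
    	}

    	// 2) Hit the apiserver's /healthz endpoint; a healthy control plane answers "ok".
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("node Ready, /healthz => %s\n", body)
    }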
	
	
	==> CRI-O <==
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.789256952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733359269789237592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=892865e6-50e9-4bba-8ed2-1ee0fb6830d0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.789688283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bf6bccd-8063-4223-a625-7ddf1fc55e51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.789758086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bf6bccd-8063-4223-a625-7ddf1fc55e51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.789979790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11d818ed3fd09c48f10110ed4e8d04bc78de0853fbd46ffe8e395e5d18bb5372,PodSandboxId:af39fd09c31b61c2cc66e7657115f9fd52d6ae2cc799f6369f33ac107c4a07d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733359264658821897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ddbf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1fdd0d-1c2f-437f-a567-7da9a53e391e,},Annotations:map[string]string{io.kubernetes.container.hash: 658dc5ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fec63252ca3e43e890671eeb30bc31ed1a62023d4c77bc037419f9612b55c8d,PodSandboxId:18727d22f5dbf164ae2465f9812709e3dae0637016a76745a2329d6be006229b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733359257394258591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6777cfb-a832-47bd-8706-4ac341ba8b79,},Annotations:map[string]string{io.kubernetes.container.hash: e30252c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc73d3bd5b1f3fd30a0aad668db73c20c55fbcf909b57f33dd75841840cf934,PodSandboxId:f42ad339fa074b75befbbf9381ef4c264f04994a82627ba07a6747f5e85b5df3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733359257131933127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrk58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
6db2a7-3df1-430f-97ca-9fada4b5efa3,},Annotations:map[string]string{io.kubernetes.container.hash: 655385b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88438f20a01e78c805fbb6b1052cba99578042321655fadc14dc1ccbc26f739,PodSandboxId:46bed4f38331bc0e23968447fdf1080cee0c348a4dba726b325f8614745a710a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733359251151008194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185e5b1ff83e228bd547db205a683754,},Anno
tations:map[string]string{io.kubernetes.container.hash: 657a8f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6853e8f40e791baa7ec1bcb1a3c2d9bf26a96ff3e01b8898f2566bc098de4250,PodSandboxId:114aeaa402d0ff68761c101724aeda942f085e61f483246d49eb750a2760feb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733359251163282923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a87603b2e10252eef9676
8a333a5c0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffea7532293431c0b04ed72c317cda12ce6767d925701e1b25ad6b8fe34aaae3,PodSandboxId:e2deb18ec8506e39ba514991f6751fca1afd562e81e75b3069395b6d554f39df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733359251107058774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2622cd95563fc448796765d42e57e61a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a3b59e66a2089d623a357333ea18224dcc0c65402fbd752538ec2233ef5af4,PodSandboxId:ff502a14987e99ef5cc0b8c4ac1d46bc5474cfad6a85ebff4be5e5cb2c117ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733359251078258568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c32bff294fa7dc0f491d09aa87c14f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bf6bccd-8063-4223-a625-7ddf1fc55e51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.824404013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58a2a9b2-c6ee-48bf-9837-d519459ea23c name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.824526943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58a2a9b2-c6ee-48bf-9837-d519459ea23c name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.825781277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca3a0d9e-5d79-4c1d-9f97-cd97f18de1ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.826217487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733359269826197606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca3a0d9e-5d79-4c1d-9f97-cd97f18de1ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.826740235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c4429c9-b376-4d60-ae0b-847f90673e38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.826802616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c4429c9-b376-4d60-ae0b-847f90673e38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.826975474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11d818ed3fd09c48f10110ed4e8d04bc78de0853fbd46ffe8e395e5d18bb5372,PodSandboxId:af39fd09c31b61c2cc66e7657115f9fd52d6ae2cc799f6369f33ac107c4a07d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733359264658821897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ddbf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1fdd0d-1c2f-437f-a567-7da9a53e391e,},Annotations:map[string]string{io.kubernetes.container.hash: 658dc5ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fec63252ca3e43e890671eeb30bc31ed1a62023d4c77bc037419f9612b55c8d,PodSandboxId:18727d22f5dbf164ae2465f9812709e3dae0637016a76745a2329d6be006229b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733359257394258591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6777cfb-a832-47bd-8706-4ac341ba8b79,},Annotations:map[string]string{io.kubernetes.container.hash: e30252c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc73d3bd5b1f3fd30a0aad668db73c20c55fbcf909b57f33dd75841840cf934,PodSandboxId:f42ad339fa074b75befbbf9381ef4c264f04994a82627ba07a6747f5e85b5df3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733359257131933127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrk58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
6db2a7-3df1-430f-97ca-9fada4b5efa3,},Annotations:map[string]string{io.kubernetes.container.hash: 655385b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88438f20a01e78c805fbb6b1052cba99578042321655fadc14dc1ccbc26f739,PodSandboxId:46bed4f38331bc0e23968447fdf1080cee0c348a4dba726b325f8614745a710a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733359251151008194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185e5b1ff83e228bd547db205a683754,},Anno
tations:map[string]string{io.kubernetes.container.hash: 657a8f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6853e8f40e791baa7ec1bcb1a3c2d9bf26a96ff3e01b8898f2566bc098de4250,PodSandboxId:114aeaa402d0ff68761c101724aeda942f085e61f483246d49eb750a2760feb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733359251163282923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a87603b2e10252eef9676
8a333a5c0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffea7532293431c0b04ed72c317cda12ce6767d925701e1b25ad6b8fe34aaae3,PodSandboxId:e2deb18ec8506e39ba514991f6751fca1afd562e81e75b3069395b6d554f39df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733359251107058774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2622cd95563fc448796765d42e57e61a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a3b59e66a2089d623a357333ea18224dcc0c65402fbd752538ec2233ef5af4,PodSandboxId:ff502a14987e99ef5cc0b8c4ac1d46bc5474cfad6a85ebff4be5e5cb2c117ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733359251078258568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c32bff294fa7dc0f491d09aa87c14f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c4429c9-b376-4d60-ae0b-847f90673e38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.861998306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09cce5dd-f38f-49b0-9945-e0a0766daa12 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.862082291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09cce5dd-f38f-49b0-9945-e0a0766daa12 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.863059816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9ae969b-8210-4ac9-9c86-aeca95122a7b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.863727203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733359269863705110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9ae969b-8210-4ac9-9c86-aeca95122a7b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.864224607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5c8e804-783e-49cd-9b98-5e2a19e00328 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.864290725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5c8e804-783e-49cd-9b98-5e2a19e00328 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.864614513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11d818ed3fd09c48f10110ed4e8d04bc78de0853fbd46ffe8e395e5d18bb5372,PodSandboxId:af39fd09c31b61c2cc66e7657115f9fd52d6ae2cc799f6369f33ac107c4a07d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733359264658821897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ddbf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1fdd0d-1c2f-437f-a567-7da9a53e391e,},Annotations:map[string]string{io.kubernetes.container.hash: 658dc5ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fec63252ca3e43e890671eeb30bc31ed1a62023d4c77bc037419f9612b55c8d,PodSandboxId:18727d22f5dbf164ae2465f9812709e3dae0637016a76745a2329d6be006229b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733359257394258591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6777cfb-a832-47bd-8706-4ac341ba8b79,},Annotations:map[string]string{io.kubernetes.container.hash: e30252c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc73d3bd5b1f3fd30a0aad668db73c20c55fbcf909b57f33dd75841840cf934,PodSandboxId:f42ad339fa074b75befbbf9381ef4c264f04994a82627ba07a6747f5e85b5df3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733359257131933127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrk58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
6db2a7-3df1-430f-97ca-9fada4b5efa3,},Annotations:map[string]string{io.kubernetes.container.hash: 655385b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88438f20a01e78c805fbb6b1052cba99578042321655fadc14dc1ccbc26f739,PodSandboxId:46bed4f38331bc0e23968447fdf1080cee0c348a4dba726b325f8614745a710a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733359251151008194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185e5b1ff83e228bd547db205a683754,},Anno
tations:map[string]string{io.kubernetes.container.hash: 657a8f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6853e8f40e791baa7ec1bcb1a3c2d9bf26a96ff3e01b8898f2566bc098de4250,PodSandboxId:114aeaa402d0ff68761c101724aeda942f085e61f483246d49eb750a2760feb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733359251163282923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a87603b2e10252eef9676
8a333a5c0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffea7532293431c0b04ed72c317cda12ce6767d925701e1b25ad6b8fe34aaae3,PodSandboxId:e2deb18ec8506e39ba514991f6751fca1afd562e81e75b3069395b6d554f39df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733359251107058774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2622cd95563fc448796765d42e57e61a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a3b59e66a2089d623a357333ea18224dcc0c65402fbd752538ec2233ef5af4,PodSandboxId:ff502a14987e99ef5cc0b8c4ac1d46bc5474cfad6a85ebff4be5e5cb2c117ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733359251078258568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c32bff294fa7dc0f491d09aa87c14f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5c8e804-783e-49cd-9b98-5e2a19e00328 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.897806447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2af0dd9-7c06-41f6-95e3-21c89ec60221 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.897892393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2af0dd9-7c06-41f6-95e3-21c89ec60221 name=/runtime.v1.RuntimeService/Version
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.899193864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ba31242-224d-49c1-96f6-e01256eb412f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.899681614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733359269899661578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ba31242-224d-49c1-96f6-e01256eb412f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.900204682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10d3a0d0-15f1-43af-b409-a398ec8382a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.900293423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10d3a0d0-15f1-43af-b409-a398ec8382a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 00:41:09 test-preload-792543 crio[657]: time="2024-12-05 00:41:09.900482141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11d818ed3fd09c48f10110ed4e8d04bc78de0853fbd46ffe8e395e5d18bb5372,PodSandboxId:af39fd09c31b61c2cc66e7657115f9fd52d6ae2cc799f6369f33ac107c4a07d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733359264658821897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-ddbf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1fdd0d-1c2f-437f-a567-7da9a53e391e,},Annotations:map[string]string{io.kubernetes.container.hash: 658dc5ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fec63252ca3e43e890671eeb30bc31ed1a62023d4c77bc037419f9612b55c8d,PodSandboxId:18727d22f5dbf164ae2465f9812709e3dae0637016a76745a2329d6be006229b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733359257394258591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e6777cfb-a832-47bd-8706-4ac341ba8b79,},Annotations:map[string]string{io.kubernetes.container.hash: e30252c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc73d3bd5b1f3fd30a0aad668db73c20c55fbcf909b57f33dd75841840cf934,PodSandboxId:f42ad339fa074b75befbbf9381ef4c264f04994a82627ba07a6747f5e85b5df3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733359257131933127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrk58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
6db2a7-3df1-430f-97ca-9fada4b5efa3,},Annotations:map[string]string{io.kubernetes.container.hash: 655385b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88438f20a01e78c805fbb6b1052cba99578042321655fadc14dc1ccbc26f739,PodSandboxId:46bed4f38331bc0e23968447fdf1080cee0c348a4dba726b325f8614745a710a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733359251151008194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185e5b1ff83e228bd547db205a683754,},Anno
tations:map[string]string{io.kubernetes.container.hash: 657a8f69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6853e8f40e791baa7ec1bcb1a3c2d9bf26a96ff3e01b8898f2566bc098de4250,PodSandboxId:114aeaa402d0ff68761c101724aeda942f085e61f483246d49eb750a2760feb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733359251163282923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a87603b2e10252eef9676
8a333a5c0,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffea7532293431c0b04ed72c317cda12ce6767d925701e1b25ad6b8fe34aaae3,PodSandboxId:e2deb18ec8506e39ba514991f6751fca1afd562e81e75b3069395b6d554f39df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733359251107058774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2622cd95563fc448796765d42e57e61a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71a3b59e66a2089d623a357333ea18224dcc0c65402fbd752538ec2233ef5af4,PodSandboxId:ff502a14987e99ef5cc0b8c4ac1d46bc5474cfad6a85ebff4be5e5cb2c117ef6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733359251078258568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-792543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c32bff294fa7dc0f491d09aa87c14f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10d3a0d0-15f1-43af-b409-a398ec8382a7 name=/runtime.v1.RuntimeService/ListContainers
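
The Request/Response pairs above are periodic ListContainers calls against the CRI-O socket; an empty filter ("No filters were applied") returns the full container list. As an illustration only, assuming the unix:///var/run/crio/crio.sock path from the node's cri-socket annotation and the k8s.io/cri-api v1 bindings, the same RPC can be issued directly:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O socket advertised in the node's cri-socket annotation.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter returns every container, matching the debug log above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%-13.13s %-30s %s\n", c.Id, c.Metadata.Name, c.State)
    	}
    }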
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11d818ed3fd09       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   af39fd09c31b6       coredns-6d4b75cb6d-ddbf9
	6fec63252ca3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   18727d22f5dbf       storage-provisioner
	abc73d3bd5b1f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   f42ad339fa074       kube-proxy-zrk58
	6853e8f40e791       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   114aeaa402d0f       kube-controller-manager-test-preload-792543
	f88438f20a01e       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   46bed4f38331b       etcd-test-preload-792543
	ffea753229343       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   e2deb18ec8506       kube-scheduler-test-preload-792543
	71a3b59e66a20       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   ff502a14987e9       kube-apiserver-test-preload-792543
	
	
	==> coredns [11d818ed3fd09c48f10110ed4e8d04bc78de0853fbd46ffe8e395e5d18bb5372] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:52639 - 21119 "HINFO IN 2110174942659569808.6973152795193215997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025555348s
	
	
	==> describe nodes <==
	Name:               test-preload-792543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-792543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=test-preload-792543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T00_39_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 00:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-792543
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:41:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:41:06 +0000   Thu, 05 Dec 2024 00:39:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:41:06 +0000   Thu, 05 Dec 2024 00:39:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:41:06 +0000   Thu, 05 Dec 2024 00:39:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:41:06 +0000   Thu, 05 Dec 2024 00:41:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    test-preload-792543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4e9ec04b5b744a1a92f5740952bc2f2
	  System UUID:                c4e9ec04-b5b7-44a1-a92f-5740952bc2f2
	  Boot ID:                    34c9f1f0-a3e2-4769-82dd-bd24c0fde2b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-ddbf9                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-792543                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         95s
	  kube-system                 kube-apiserver-test-preload-792543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-test-preload-792543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-zrk58                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-test-preload-792543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-792543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-792543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-792543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                85s                kubelet          Node test-preload-792543 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node test-preload-792543 event: Registered Node test-preload-792543 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-792543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-792543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-792543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-792543 event: Registered Node test-preload-792543 in Controller
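
For reference, the percentages in the Allocated resources block above are the summed pod requests divided by the node's Allocatable figures, truncated the way kubectl describe prints them. A tiny standalone check (not part of the test output) reproducing the 37% and 8% values:

    package main

    import "fmt"

    func main() {
    	// Allocatable from the node description: 2 CPUs (2000m) and 2164184Ki memory.
    	allocCPUMilli := int64(2000)
    	allocMemKi := int64(2164184)

    	// Summed requests across the 7 pods: cpu 750m, memory 170Mi (= 170*1024 Ki).
    	reqCPUMilli := int64(750)
    	reqMemKi := int64(170 * 1024)

    	// Integer division reproduces the truncated figures: 37% and 8%.
    	fmt.Printf("cpu    %dm (%d%%)\n", reqCPUMilli, reqCPUMilli*100/allocCPUMilli)
    	fmt.Printf("memory %dMi (%d%%)\n", reqMemKi/1024, reqMemKi*100/allocMemKi)
    }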
	
	
	==> dmesg <==
	[Dec 5 00:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052388] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037829] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818241] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.004830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.541314] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.054725] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058711] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.180437] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.130269] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.272308] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[ +12.817308] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.055889] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.590705] systemd-fstab-generator[1109]: Ignoring "noauto" option for root device
	[  +5.602052] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.545678] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[Dec 5 00:41] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [f88438f20a01e78c805fbb6b1052cba99578042321655fadc14dc1ccbc26f739] <==
	{"level":"info","ts":"2024-12-05T00:40:51.590Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b3a0188682bd7022","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-05T00:40:51.604Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-05T00:40:51.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 switched to configuration voters=(12943372295060942882)"}
	{"level":"info","ts":"2024-12-05T00:40:51.605Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6df60d153d3d688","local-member-id":"b3a0188682bd7022","added-peer-id":"b3a0188682bd7022","added-peer-peer-urls":["https://192.168.39.177:2380"]}
	{"level":"info","ts":"2024-12-05T00:40:51.606Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6df60d153d3d688","local-member-id":"b3a0188682bd7022","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T00:40:51.606Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T00:40:51.622Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.177:2380"}
	{"level":"info","ts":"2024-12-05T00:40:51.622Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.177:2380"}
	{"level":"info","ts":"2024-12-05T00:40:51.622Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T00:40:51.626Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b3a0188682bd7022","initial-advertise-peer-urls":["https://192.168.39.177:2380"],"listen-peer-urls":["https://192.168.39.177:2380"],"advertise-client-urls":["https://192.168.39.177:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.177:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T00:40:51.628Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 received MsgPreVoteResp from b3a0188682bd7022 at term 2"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 received MsgVoteResp from b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T00:40:53.154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b3a0188682bd7022 elected leader b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-12-05T00:40:53.159Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b3a0188682bd7022","local-member-attributes":"{Name:test-preload-792543 ClientURLs:[https://192.168.39.177:2379]}","request-path":"/0/members/b3a0188682bd7022/attributes","cluster-id":"e6df60d153d3d688","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T00:40:53.159Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T00:40:53.160Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T00:40:53.160Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T00:40:53.162Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.177:2379"}
	{"level":"info","ts":"2024-12-05T00:40:53.166Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T00:40:53.167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:41:10 up 0 min,  0 users,  load average: 1.19, 0.29, 0.09
	Linux test-preload-792543 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71a3b59e66a2089d623a357333ea18224dcc0c65402fbd752538ec2233ef5af4] <==
	I1205 00:40:55.506984       1 naming_controller.go:291] Starting NamingConditionController
	I1205 00:40:55.507084       1 establishing_controller.go:76] Starting EstablishingController
	I1205 00:40:55.507349       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1205 00:40:55.507417       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1205 00:40:55.507467       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1205 00:40:55.521069       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1205 00:40:55.521159       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1205 00:40:55.599811       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1205 00:40:55.622630       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1205 00:40:55.654618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 00:40:55.654735       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 00:40:55.656798       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1205 00:40:55.663052       1 cache.go:39] Caches are synced for autoregister controller
	I1205 00:40:55.663812       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1205 00:40:55.684420       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 00:40:56.135481       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 00:40:56.465019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 00:40:57.163858       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1205 00:40:57.177616       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1205 00:40:57.229650       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1205 00:40:57.257770       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 00:40:57.263129       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 00:40:57.509570       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1205 00:41:08.543894       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 00:41:08.643869       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6853e8f40e791baa7ec1bcb1a3c2d9bf26a96ff3e01b8898f2566bc098de4250] <==
	I1205 00:41:08.331386       1 shared_informer.go:262] Caches are synced for service account
	I1205 00:41:08.331539       1 shared_informer.go:262] Caches are synced for endpoint
	I1205 00:41:08.335691       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1205 00:41:08.338205       1 shared_informer.go:262] Caches are synced for TTL
	I1205 00:41:08.341573       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1205 00:41:08.342884       1 shared_informer.go:262] Caches are synced for attach detach
	I1205 00:41:08.345206       1 shared_informer.go:262] Caches are synced for ephemeral
	I1205 00:41:08.347061       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1205 00:41:08.349230       1 shared_informer.go:262] Caches are synced for GC
	I1205 00:41:08.352755       1 shared_informer.go:262] Caches are synced for taint
	I1205 00:41:08.352853       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I1205 00:41:08.352934       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W1205 00:41:08.352970       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-792543. Assuming now as a timestamp.
	I1205 00:41:08.354830       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1205 00:41:08.355141       1 shared_informer.go:262] Caches are synced for expand
	I1205 00:41:08.353250       1 event.go:294] "Event occurred" object="test-preload-792543" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-792543 event: Registered Node test-preload-792543 in Controller"
	I1205 00:41:08.368026       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1205 00:41:08.378067       1 shared_informer.go:262] Caches are synced for disruption
	I1205 00:41:08.378097       1 disruption.go:371] Sending events to api server.
	I1205 00:41:08.492581       1 shared_informer.go:262] Caches are synced for HPA
	I1205 00:41:08.547702       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 00:41:08.552748       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 00:41:08.968599       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 00:41:09.045178       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 00:41:09.045226       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [abc73d3bd5b1f3fd30a0aad668db73c20c55fbcf909b57f33dd75841840cf934] <==
	I1205 00:40:57.450212       1 node.go:163] Successfully retrieved node IP: 192.168.39.177
	I1205 00:40:57.450586       1 server_others.go:138] "Detected node IP" address="192.168.39.177"
	I1205 00:40:57.450939       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1205 00:40:57.497668       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1205 00:40:57.497695       1 server_others.go:206] "Using iptables Proxier"
	I1205 00:40:57.497739       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1205 00:40:57.498014       1 server.go:661] "Version info" version="v1.24.4"
	I1205 00:40:57.498036       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 00:40:57.500693       1 config.go:317] "Starting service config controller"
	I1205 00:40:57.500735       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1205 00:40:57.500816       1 config.go:226] "Starting endpoint slice config controller"
	I1205 00:40:57.500839       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1205 00:40:57.504584       1 config.go:444] "Starting node config controller"
	I1205 00:40:57.504612       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1205 00:40:57.601570       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1205 00:40:57.601631       1 shared_informer.go:262] Caches are synced for service config
	I1205 00:40:57.604824       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ffea7532293431c0b04ed72c317cda12ce6767d925701e1b25ad6b8fe34aaae3] <==
	I1205 00:40:51.699771       1 serving.go:348] Generated self-signed cert in-memory
	W1205 00:40:55.541099       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 00:40:55.542016       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 00:40:55.542080       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 00:40:55.542106       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 00:40:55.600075       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1205 00:40:55.600145       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 00:40:55.604575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 00:40:55.604625       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 00:40:55.605399       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1205 00:40:55.606482       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 00:40:55.705597       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 00:40:55 test-preload-792543 kubelet[1116]: I1205 00:40:55.644753    1116 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-792543"
	Dec 05 00:40:55 test-preload-792543 kubelet[1116]: I1205 00:40:55.647240    1116 setters.go:532] "Node became not ready" node="test-preload-792543" condition={Type:Ready Status:False LastHeartbeatTime:2024-12-05 00:40:55.647171185 +0000 UTC m=+5.386690966 LastTransitionTime:2024-12-05 00:40:55.647171185 +0000 UTC m=+5.386690966 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.390266    1116 apiserver.go:52] "Watching apiserver"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.394879    1116 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.394995    1116 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.395050    1116 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: E1205 00:40:56.395662    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-ddbf9" podUID=7e1fdd0d-1c2f-437f-a567-7da9a53e391e
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.457862    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6777cfb-a832-47bd-8706-4ac341ba8b79-tmp\") pod \"storage-provisioner\" (UID: \"e6777cfb-a832-47bd-8706-4ac341ba8b79\") " pod="kube-system/storage-provisioner"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458069    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume\") pod \"coredns-6d4b75cb6d-ddbf9\" (UID: \"7e1fdd0d-1c2f-437f-a567-7da9a53e391e\") " pod="kube-system/coredns-6d4b75cb6d-ddbf9"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458232    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/336db2a7-3df1-430f-97ca-9fada4b5efa3-kube-proxy\") pod \"kube-proxy-zrk58\" (UID: \"336db2a7-3df1-430f-97ca-9fada4b5efa3\") " pod="kube-system/kube-proxy-zrk58"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458262    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7n94\" (UniqueName: \"kubernetes.io/projected/336db2a7-3df1-430f-97ca-9fada4b5efa3-kube-api-access-q7n94\") pod \"kube-proxy-zrk58\" (UID: \"336db2a7-3df1-430f-97ca-9fada4b5efa3\") " pod="kube-system/kube-proxy-zrk58"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458357    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8jd\" (UniqueName: \"kubernetes.io/projected/e6777cfb-a832-47bd-8706-4ac341ba8b79-kube-api-access-dm8jd\") pod \"storage-provisioner\" (UID: \"e6777cfb-a832-47bd-8706-4ac341ba8b79\") " pod="kube-system/storage-provisioner"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458444    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336db2a7-3df1-430f-97ca-9fada4b5efa3-xtables-lock\") pod \"kube-proxy-zrk58\" (UID: \"336db2a7-3df1-430f-97ca-9fada4b5efa3\") " pod="kube-system/kube-proxy-zrk58"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458595    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6rj\" (UniqueName: \"kubernetes.io/projected/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-kube-api-access-lq6rj\") pod \"coredns-6d4b75cb6d-ddbf9\" (UID: \"7e1fdd0d-1c2f-437f-a567-7da9a53e391e\") " pod="kube-system/coredns-6d4b75cb6d-ddbf9"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458700    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336db2a7-3df1-430f-97ca-9fada4b5efa3-lib-modules\") pod \"kube-proxy-zrk58\" (UID: \"336db2a7-3df1-430f-97ca-9fada4b5efa3\") " pod="kube-system/kube-proxy-zrk58"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: I1205 00:40:56.458808    1116 reconciler.go:159] "Reconciler: start to sync state"
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: E1205 00:40:56.563881    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 00:40:56 test-preload-792543 kubelet[1116]: E1205 00:40:56.564014    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume podName:7e1fdd0d-1c2f-437f-a567-7da9a53e391e nodeName:}" failed. No retries permitted until 2024-12-05 00:40:57.063982876 +0000 UTC m=+6.803502669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume") pod "coredns-6d4b75cb6d-ddbf9" (UID: "7e1fdd0d-1c2f-437f-a567-7da9a53e391e") : object "kube-system"/"coredns" not registered
	Dec 05 00:40:57 test-preload-792543 kubelet[1116]: E1205 00:40:57.068341    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 00:40:57 test-preload-792543 kubelet[1116]: E1205 00:40:57.068429    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume podName:7e1fdd0d-1c2f-437f-a567-7da9a53e391e nodeName:}" failed. No retries permitted until 2024-12-05 00:40:58.068413732 +0000 UTC m=+7.807933515 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume") pod "coredns-6d4b75cb6d-ddbf9" (UID: "7e1fdd0d-1c2f-437f-a567-7da9a53e391e") : object "kube-system"/"coredns" not registered
	Dec 05 00:40:58 test-preload-792543 kubelet[1116]: E1205 00:40:58.074677    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 00:40:58 test-preload-792543 kubelet[1116]: E1205 00:40:58.074787    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume podName:7e1fdd0d-1c2f-437f-a567-7da9a53e391e nodeName:}" failed. No retries permitted until 2024-12-05 00:41:00.074769369 +0000 UTC m=+9.814289151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume") pod "coredns-6d4b75cb6d-ddbf9" (UID: "7e1fdd0d-1c2f-437f-a567-7da9a53e391e") : object "kube-system"/"coredns" not registered
	Dec 05 00:40:58 test-preload-792543 kubelet[1116]: E1205 00:40:58.495685    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-ddbf9" podUID=7e1fdd0d-1c2f-437f-a567-7da9a53e391e
	Dec 05 00:41:00 test-preload-792543 kubelet[1116]: E1205 00:41:00.091303    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 00:41:00 test-preload-792543 kubelet[1116]: E1205 00:41:00.091850    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume podName:7e1fdd0d-1c2f-437f-a567-7da9a53e391e nodeName:}" failed. No retries permitted until 2024-12-05 00:41:04.091818964 +0000 UTC m=+13.831338746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e1fdd0d-1c2f-437f-a567-7da9a53e391e-config-volume") pod "coredns-6d4b75cb6d-ddbf9" (UID: "7e1fdd0d-1c2f-437f-a567-7da9a53e391e") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [6fec63252ca3e43e890671eeb30bc31ed1a62023d4c77bc037419f9612b55c8d] <==
	I1205 00:40:57.503630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-792543 -n test-preload-792543
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-792543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-792543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-792543
--- FAIL: TestPreload (170.96s)

                                                
                                    
x
+
TestKubernetesUpgrade (502.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.206992826s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-037038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-037038" primary control-plane node in "kubernetes-upgrade-037038" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 00:43:05.028206   60306 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:43:05.028632   60306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:43:05.028643   60306 out.go:358] Setting ErrFile to fd 2...
	I1205 00:43:05.028648   60306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:43:05.028845   60306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:43:05.030096   60306 out.go:352] Setting JSON to false
	I1205 00:43:05.031142   60306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8731,"bootTime":1733350654,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:43:05.031211   60306 start.go:139] virtualization: kvm guest
	I1205 00:43:05.032883   60306 out.go:177] * [kubernetes-upgrade-037038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:43:05.034727   60306 notify.go:220] Checking for updates...
	I1205 00:43:05.035418   60306 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:43:05.036615   60306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:43:05.038908   60306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:43:05.041969   60306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:43:05.043564   60306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:43:05.045552   60306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:43:05.047096   60306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:43:05.084804   60306 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 00:43:05.086026   60306 start.go:297] selected driver: kvm2
	I1205 00:43:05.086045   60306 start.go:901] validating driver "kvm2" against <nil>
	I1205 00:43:05.086061   60306 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:43:05.087031   60306 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:43:05.087113   60306 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:43:05.104923   60306 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:43:05.104990   60306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 00:43:05.105319   60306 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 00:43:05.105352   60306 cni.go:84] Creating CNI manager for ""
	I1205 00:43:05.105413   60306 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:43:05.105426   60306 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 00:43:05.105496   60306 start.go:340] cluster config:
	{Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:43:05.105623   60306 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:43:05.107471   60306 out.go:177] * Starting "kubernetes-upgrade-037038" primary control-plane node in "kubernetes-upgrade-037038" cluster
	I1205 00:43:05.108350   60306 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 00:43:05.108396   60306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 00:43:05.108413   60306 cache.go:56] Caching tarball of preloaded images
	I1205 00:43:05.108528   60306 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:43:05.108543   60306 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 00:43:05.108961   60306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/config.json ...
	I1205 00:43:05.108992   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/config.json: {Name:mka78cc0f5511cb316c2ac7d58260f6a718929e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:05.109163   60306 start.go:360] acquireMachinesLock for kubernetes-upgrade-037038: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:43:05.109202   60306 start.go:364] duration metric: took 20.325µs to acquireMachinesLock for "kubernetes-upgrade-037038"
	I1205 00:43:05.109224   60306 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 00:43:05.109294   60306 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 00:43:05.110869   60306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 00:43:05.110995   60306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:43:05.111048   60306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:43:05.126754   60306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I1205 00:43:05.127209   60306 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:43:05.127717   60306 main.go:141] libmachine: Using API Version  1
	I1205 00:43:05.127739   60306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:43:05.128139   60306 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:43:05.128338   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetMachineName
	I1205 00:43:05.128488   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:05.128678   60306 start.go:159] libmachine.API.Create for "kubernetes-upgrade-037038" (driver="kvm2")
	I1205 00:43:05.128707   60306 client.go:168] LocalClient.Create starting
	I1205 00:43:05.128743   60306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem
	I1205 00:43:05.128778   60306 main.go:141] libmachine: Decoding PEM data...
	I1205 00:43:05.128801   60306 main.go:141] libmachine: Parsing certificate...
	I1205 00:43:05.128866   60306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem
	I1205 00:43:05.128890   60306 main.go:141] libmachine: Decoding PEM data...
	I1205 00:43:05.128906   60306 main.go:141] libmachine: Parsing certificate...
	I1205 00:43:05.128940   60306 main.go:141] libmachine: Running pre-create checks...
	I1205 00:43:05.128955   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .PreCreateCheck
	I1205 00:43:05.129342   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetConfigRaw
	I1205 00:43:05.129779   60306 main.go:141] libmachine: Creating machine...
	I1205 00:43:05.129792   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .Create
	I1205 00:43:05.129954   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Creating KVM machine...
	I1205 00:43:05.131202   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found existing default KVM network
	I1205 00:43:05.131811   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:05.131664   60368 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151c0}
	I1205 00:43:05.131837   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | created network xml: 
	I1205 00:43:05.131849   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | <network>
	I1205 00:43:05.131858   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   <name>mk-kubernetes-upgrade-037038</name>
	I1205 00:43:05.131868   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   <dns enable='no'/>
	I1205 00:43:05.131878   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   
	I1205 00:43:05.131889   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 00:43:05.131899   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |     <dhcp>
	I1205 00:43:05.131909   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 00:43:05.131924   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |     </dhcp>
	I1205 00:43:05.131961   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   </ip>
	I1205 00:43:05.132011   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG |   
	I1205 00:43:05.132043   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | </network>
	I1205 00:43:05.132056   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | 
	I1205 00:43:05.137095   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | trying to create private KVM network mk-kubernetes-upgrade-037038 192.168.39.0/24...
	I1205 00:43:05.206160   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | private KVM network mk-kubernetes-upgrade-037038 192.168.39.0/24 created
	I1205 00:43:05.206204   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:05.206130   60368 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:43:05.206217   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting up store path in /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038 ...
	I1205 00:43:05.206232   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Building disk image from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 00:43:05.206441   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Downloading /home/jenkins/minikube-integration/20045-14126/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 00:43:05.460967   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:05.460829   60368 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa...
	I1205 00:43:05.720961   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:05.720832   60368 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/kubernetes-upgrade-037038.rawdisk...
	I1205 00:43:05.720992   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Writing magic tar header
	I1205 00:43:05.721019   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Writing SSH key tar header
	I1205 00:43:05.721033   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:05.720961   60368 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038 ...
	I1205 00:43:05.721056   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038
	I1205 00:43:05.721134   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038 (perms=drwx------)
	I1205 00:43:05.721164   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube/machines (perms=drwxr-xr-x)
	I1205 00:43:05.721177   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube/machines
	I1205 00:43:05.721203   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:43:05.721216   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20045-14126
	I1205 00:43:05.721225   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 00:43:05.721231   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home/jenkins
	I1205 00:43:05.721273   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126/.minikube (perms=drwxr-xr-x)
	I1205 00:43:05.721294   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Checking permissions on dir: /home
	I1205 00:43:05.721307   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins/minikube-integration/20045-14126 (perms=drwxrwxr-x)
	I1205 00:43:05.721320   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 00:43:05.721334   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 00:43:05.721344   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Skipping /home - not owner
	I1205 00:43:05.721354   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Creating domain...
	I1205 00:43:05.722469   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) define libvirt domain using xml: 
	I1205 00:43:05.722489   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) <domain type='kvm'>
	I1205 00:43:05.722499   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <name>kubernetes-upgrade-037038</name>
	I1205 00:43:05.722507   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <memory unit='MiB'>2200</memory>
	I1205 00:43:05.722516   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <vcpu>2</vcpu>
	I1205 00:43:05.722527   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <features>
	I1205 00:43:05.722540   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <acpi/>
	I1205 00:43:05.722551   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <apic/>
	I1205 00:43:05.722565   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <pae/>
	I1205 00:43:05.722586   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     
	I1205 00:43:05.722599   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   </features>
	I1205 00:43:05.722610   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <cpu mode='host-passthrough'>
	I1205 00:43:05.722619   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   
	I1205 00:43:05.722636   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   </cpu>
	I1205 00:43:05.722648   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <os>
	I1205 00:43:05.722660   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <type>hvm</type>
	I1205 00:43:05.722674   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <boot dev='cdrom'/>
	I1205 00:43:05.722685   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <boot dev='hd'/>
	I1205 00:43:05.722698   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <bootmenu enable='no'/>
	I1205 00:43:05.722713   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   </os>
	I1205 00:43:05.722726   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   <devices>
	I1205 00:43:05.722738   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <disk type='file' device='cdrom'>
	I1205 00:43:05.722757   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/boot2docker.iso'/>
	I1205 00:43:05.722770   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <target dev='hdc' bus='scsi'/>
	I1205 00:43:05.722783   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <readonly/>
	I1205 00:43:05.722794   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </disk>
	I1205 00:43:05.722809   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <disk type='file' device='disk'>
	I1205 00:43:05.722823   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 00:43:05.722840   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <source file='/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/kubernetes-upgrade-037038.rawdisk'/>
	I1205 00:43:05.722855   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <target dev='hda' bus='virtio'/>
	I1205 00:43:05.722862   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </disk>
	I1205 00:43:05.722887   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <interface type='network'>
	I1205 00:43:05.722898   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <source network='mk-kubernetes-upgrade-037038'/>
	I1205 00:43:05.722906   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <model type='virtio'/>
	I1205 00:43:05.722913   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </interface>
	I1205 00:43:05.722932   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <interface type='network'>
	I1205 00:43:05.722962   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <source network='default'/>
	I1205 00:43:05.722986   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <model type='virtio'/>
	I1205 00:43:05.723002   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </interface>
	I1205 00:43:05.723024   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <serial type='pty'>
	I1205 00:43:05.723038   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <target port='0'/>
	I1205 00:43:05.723045   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </serial>
	I1205 00:43:05.723068   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <console type='pty'>
	I1205 00:43:05.723082   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <target type='serial' port='0'/>
	I1205 00:43:05.723088   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </console>
	I1205 00:43:05.723096   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     <rng model='virtio'>
	I1205 00:43:05.723102   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)       <backend model='random'>/dev/random</backend>
	I1205 00:43:05.723106   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     </rng>
	I1205 00:43:05.723111   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     
	I1205 00:43:05.723117   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)     
	I1205 00:43:05.723122   60306 main.go:141] libmachine: (kubernetes-upgrade-037038)   </devices>
	I1205 00:43:05.723128   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) </domain>
	I1205 00:43:05.723135   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) 
	I1205 00:43:05.727597   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:00:65:09 in network default
	I1205 00:43:05.728165   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Ensuring networks are active...
	I1205 00:43:05.728183   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:05.728934   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Ensuring network default is active
	I1205 00:43:05.729301   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Ensuring network mk-kubernetes-upgrade-037038 is active
	I1205 00:43:05.729917   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Getting domain xml...
	I1205 00:43:05.730776   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Creating domain...
	I1205 00:43:07.004491   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Waiting to get IP...
	I1205 00:43:07.005218   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.005556   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.005576   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:07.005539   60368 retry.go:31] will retry after 234.933842ms: waiting for machine to come up
	I1205 00:43:07.242180   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.242606   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.242632   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:07.242571   60368 retry.go:31] will retry after 242.901754ms: waiting for machine to come up
	I1205 00:43:07.486842   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.487131   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.487157   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:07.487093   60368 retry.go:31] will retry after 452.770561ms: waiting for machine to come up
	I1205 00:43:07.941631   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.942105   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:07.942122   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:07.942080   60368 retry.go:31] will retry after 435.37967ms: waiting for machine to come up
	I1205 00:43:08.378595   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:08.379028   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:08.379059   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:08.378966   60368 retry.go:31] will retry after 564.250421ms: waiting for machine to come up
	I1205 00:43:08.944616   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:08.944970   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:08.945001   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:08.944918   60368 retry.go:31] will retry after 711.359934ms: waiting for machine to come up
	I1205 00:43:09.657620   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:09.658027   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:09.658076   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:09.657984   60368 retry.go:31] will retry after 1.064137863s: waiting for machine to come up
	I1205 00:43:10.724259   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:10.724675   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:10.724696   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:10.724591   60368 retry.go:31] will retry after 1.010344401s: waiting for machine to come up
	I1205 00:43:11.736158   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:11.736529   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:11.736556   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:11.736478   60368 retry.go:31] will retry after 1.518694302s: waiting for machine to come up
	I1205 00:43:13.257185   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:13.257682   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:13.257710   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:13.257633   60368 retry.go:31] will retry after 1.969543628s: waiting for machine to come up
	I1205 00:43:15.228511   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:15.228968   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:15.229011   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:15.228937   60368 retry.go:31] will retry after 1.949857146s: waiting for machine to come up
	I1205 00:43:17.179780   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:17.180289   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:17.180319   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:17.180234   60368 retry.go:31] will retry after 2.550116345s: waiting for machine to come up
	I1205 00:43:19.734253   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:19.734647   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:19.734672   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:19.734601   60368 retry.go:31] will retry after 3.039819088s: waiting for machine to come up
	I1205 00:43:22.775986   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:22.776386   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find current IP address of domain kubernetes-upgrade-037038 in network mk-kubernetes-upgrade-037038
	I1205 00:43:22.776407   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | I1205 00:43:22.776346   60368 retry.go:31] will retry after 4.573748377s: waiting for machine to come up
	I1205 00:43:27.352038   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.352460   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has current primary IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.352480   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Found IP for machine: 192.168.39.136
	I1205 00:43:27.352495   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Reserving static IP address...
	I1205 00:43:27.352910   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-037038", mac: "52:54:00:48:02:5e", ip: "192.168.39.136"} in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.430229   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Getting to WaitForSSH function...
	I1205 00:43:27.430256   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Reserved static IP address: 192.168.39.136
	I1205 00:43:27.430271   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Waiting for SSH to be available...
	I1205 00:43:27.433594   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.434387   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:27.434425   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.434444   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Using SSH client type: external
	I1205 00:43:27.434460   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Using SSH private key: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa (-rw-------)
	I1205 00:43:27.434530   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 00:43:27.434552   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | About to run SSH command:
	I1205 00:43:27.434565   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | exit 0
	I1205 00:43:27.561799   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | SSH cmd err, output: <nil>: 
	I1205 00:43:27.562149   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) KVM machine creation complete!
	I1205 00:43:27.562442   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetConfigRaw
	I1205 00:43:27.563117   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:27.563323   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:27.563495   60306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 00:43:27.563512   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetState
	I1205 00:43:27.565058   60306 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 00:43:27.565076   60306 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 00:43:27.565082   60306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 00:43:27.565087   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:27.567268   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.567622   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:27.567670   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.567782   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:27.567983   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.568146   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.568296   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:27.568464   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:27.568644   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:27.568653   60306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 00:43:27.685023   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:43:27.685051   60306 main.go:141] libmachine: Detecting the provisioner...
	I1205 00:43:27.685063   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:27.687919   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.688336   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:27.688371   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.688502   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:27.688711   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.688872   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.689023   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:27.689189   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:27.689401   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:27.689422   60306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 00:43:27.802587   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 00:43:27.802657   60306 main.go:141] libmachine: found compatible host: buildroot
	I1205 00:43:27.802670   60306 main.go:141] libmachine: Provisioning with buildroot...
	I1205 00:43:27.802680   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetMachineName
	I1205 00:43:27.802937   60306 buildroot.go:166] provisioning hostname "kubernetes-upgrade-037038"
	I1205 00:43:27.802990   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetMachineName
	I1205 00:43:27.803167   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:27.805684   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.805993   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:27.806023   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.806171   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:27.806358   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.806522   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.806672   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:27.806827   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:27.806989   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:27.807001   60306 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-037038 && echo "kubernetes-upgrade-037038" | sudo tee /etc/hostname
	I1205 00:43:27.933001   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-037038
	
	I1205 00:43:27.933037   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:27.935461   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.935808   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:27.935849   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:27.936026   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:27.936199   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.936344   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:27.936516   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:27.936674   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:27.936845   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:27.936870   60306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-037038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-037038/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-037038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 00:43:28.054171   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:43:28.054200   60306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20045-14126/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-14126/.minikube}
	I1205 00:43:28.054254   60306 buildroot.go:174] setting up certificates
	I1205 00:43:28.054272   60306 provision.go:84] configureAuth start
	I1205 00:43:28.054289   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetMachineName
	I1205 00:43:28.054529   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetIP
	I1205 00:43:28.057295   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.057660   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.057693   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.057820   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.059729   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.060017   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.060047   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.060114   60306 provision.go:143] copyHostCerts
	I1205 00:43:28.060201   60306 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem, removing ...
	I1205 00:43:28.060214   60306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem
	I1205 00:43:28.060292   60306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/ca.pem (1082 bytes)
	I1205 00:43:28.060416   60306 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem, removing ...
	I1205 00:43:28.060426   60306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem
	I1205 00:43:28.060470   60306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/cert.pem (1123 bytes)
	I1205 00:43:28.060568   60306 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem, removing ...
	I1205 00:43:28.060580   60306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem
	I1205 00:43:28.060619   60306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-14126/.minikube/key.pem (1675 bytes)
	I1205 00:43:28.060705   60306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-037038 san=[127.0.0.1 192.168.39.136 kubernetes-upgrade-037038 localhost minikube]
	I1205 00:43:28.213961   60306 provision.go:177] copyRemoteCerts
	I1205 00:43:28.214026   60306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:43:28.214052   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.216879   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.217252   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.217288   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.217429   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.217602   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.217743   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.217879   60306 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa Username:docker}
	I1205 00:43:28.303660   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:43:28.326882   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 00:43:28.349740   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 00:43:28.375216   60306 provision.go:87] duration metric: took 320.914943ms to configureAuth
	I1205 00:43:28.375253   60306 buildroot.go:189] setting minikube options for container-runtime
	I1205 00:43:28.375455   60306 config.go:182] Loaded profile config "kubernetes-upgrade-037038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 00:43:28.375539   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.378198   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.378567   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.378614   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.378804   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.379037   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.379185   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.379336   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.379491   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:28.379681   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:28.379698   60306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 00:43:28.616120   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 00:43:28.616164   60306 main.go:141] libmachine: Checking connection to Docker...
	I1205 00:43:28.616173   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetURL
	I1205 00:43:28.617574   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | Using libvirt version 6000000
	I1205 00:43:28.619668   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.619960   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.619995   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.620128   60306 main.go:141] libmachine: Docker is up and running!
	I1205 00:43:28.620143   60306 main.go:141] libmachine: Reticulating splines...
	I1205 00:43:28.620151   60306 client.go:171] duration metric: took 23.491433611s to LocalClient.Create
	I1205 00:43:28.620181   60306 start.go:167] duration metric: took 23.491503814s to libmachine.API.Create "kubernetes-upgrade-037038"
	I1205 00:43:28.620211   60306 start.go:293] postStartSetup for "kubernetes-upgrade-037038" (driver="kvm2")
	I1205 00:43:28.620223   60306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:43:28.620243   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:28.620480   60306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:43:28.620506   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.622597   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.622921   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.622951   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.623110   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.623327   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.623499   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.623663   60306 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa Username:docker}
	I1205 00:43:28.708386   60306 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:43:28.712605   60306 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 00:43:28.712633   60306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/addons for local assets ...
	I1205 00:43:28.712703   60306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-14126/.minikube/files for local assets ...
	I1205 00:43:28.712819   60306 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem -> 213852.pem in /etc/ssl/certs
	I1205 00:43:28.712944   60306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:43:28.722325   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:43:28.746583   60306 start.go:296] duration metric: took 126.350228ms for postStartSetup
	I1205 00:43:28.746684   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetConfigRaw
	I1205 00:43:28.747321   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetIP
	I1205 00:43:28.750366   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.750807   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.750835   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.751130   60306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/config.json ...
	I1205 00:43:28.751465   60306 start.go:128] duration metric: took 23.642156674s to createHost
	I1205 00:43:28.751504   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.753959   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.754305   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.754367   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.754457   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.754646   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.754792   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.754931   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.755103   60306 main.go:141] libmachine: Using SSH client type: native
	I1205 00:43:28.755263   60306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1205 00:43:28.755287   60306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 00:43:28.866545   60306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733359408.842305403
	
	I1205 00:43:28.866570   60306 fix.go:216] guest clock: 1733359408.842305403
	I1205 00:43:28.866578   60306 fix.go:229] Guest: 2024-12-05 00:43:28.842305403 +0000 UTC Remote: 2024-12-05 00:43:28.751485864 +0000 UTC m=+23.775505489 (delta=90.819539ms)
	I1205 00:43:28.866614   60306 fix.go:200] guest clock delta is within tolerance: 90.819539ms
	I1205 00:43:28.866619   60306 start.go:83] releasing machines lock for "kubernetes-upgrade-037038", held for 23.757407044s
	I1205 00:43:28.866642   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:28.866920   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetIP
	I1205 00:43:28.869714   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.870109   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.870133   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.870459   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:28.871030   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:28.871224   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .DriverName
	I1205 00:43:28.871310   60306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:43:28.871365   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.871460   60306 ssh_runner.go:195] Run: cat /version.json
	I1205 00:43:28.871484   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHHostname
	I1205 00:43:28.874260   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.874551   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.874709   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.874739   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.874882   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.875032   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:28.875075   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:28.875149   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.875331   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHPort
	I1205 00:43:28.875350   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.875475   60306 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa Username:docker}
	I1205 00:43:28.875504   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHKeyPath
	I1205 00:43:28.875692   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetSSHUsername
	I1205 00:43:28.875818   60306 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/kubernetes-upgrade-037038/id_rsa Username:docker}
	I1205 00:43:28.990926   60306 ssh_runner.go:195] Run: systemctl --version
	I1205 00:43:28.999463   60306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 00:43:29.159855   60306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 00:43:29.166457   60306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 00:43:29.166542   60306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:43:29.183043   60306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 00:43:29.183072   60306 start.go:495] detecting cgroup driver to use...
	I1205 00:43:29.183145   60306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 00:43:29.202873   60306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 00:43:29.216672   60306 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:43:29.216737   60306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:43:29.231001   60306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:43:29.244820   60306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:43:29.364528   60306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:43:29.512198   60306 docker.go:233] disabling docker service ...
	I1205 00:43:29.512273   60306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:43:29.529059   60306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:43:29.542200   60306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:43:29.678624   60306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:43:29.790045   60306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:43:29.805633   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:43:29.824927   60306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 00:43:29.824996   60306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:43:29.835381   60306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 00:43:29.835448   60306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:43:29.845680   60306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:43:29.856299   60306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 00:43:29.867052   60306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:43:29.878373   60306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:43:29.887431   60306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 00:43:29.887495   60306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 00:43:29.903763   60306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:43:29.915692   60306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:43:30.033351   60306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 00:43:30.124219   60306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 00:43:30.124291   60306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 00:43:30.129002   60306 start.go:563] Will wait 60s for crictl version
	I1205 00:43:30.129059   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:30.132536   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:43:30.181557   60306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 00:43:30.181635   60306 ssh_runner.go:195] Run: crio --version
	I1205 00:43:30.209526   60306 ssh_runner.go:195] Run: crio --version
	I1205 00:43:30.242578   60306 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 00:43:30.243973   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetIP
	I1205 00:43:30.247736   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:30.248189   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:43:19 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:43:30.248212   60306 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:43:30.248520   60306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:43:30.252642   60306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 00:43:30.264736   60306 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:43:30.264866   60306 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 00:43:30.264970   60306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:43:30.295635   60306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 00:43:30.295708   60306 ssh_runner.go:195] Run: which lz4
	I1205 00:43:30.302382   60306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 00:43:30.309891   60306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 00:43:30.309925   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 00:43:31.814222   60306 crio.go:462] duration metric: took 1.511871976s to copy over tarball
	I1205 00:43:31.814317   60306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 00:43:34.372211   60306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.55785342s)
	I1205 00:43:34.372251   60306 crio.go:469] duration metric: took 2.557997098s to extract the tarball
	I1205 00:43:34.372261   60306 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 00:43:34.415324   60306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:43:34.457732   60306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 00:43:34.457754   60306 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 00:43:34.457820   60306 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:43:34.457832   60306 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:34.457845   60306 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.457883   60306 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 00:43:34.457900   60306 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:34.457932   60306 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.457954   60306 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.457926   60306 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.459444   60306 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 00:43:34.459468   60306 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:34.459475   60306 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.459496   60306 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.459444   60306 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:34.459445   60306 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.459446   60306 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:43:34.459452   60306 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.666057   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.673429   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 00:43:34.708798   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.709982   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.711255   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.711951   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:34.716045   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:34.755230   60306 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 00:43:34.755290   60306 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.755308   60306 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 00:43:34.755339   60306 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 00:43:34.755388   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.755340   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.847045   60306 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 00:43:34.847096   60306 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.847156   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.847225   60306 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 00:43:34.847263   60306 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 00:43:34.847277   60306 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.847293   60306 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.847309   60306 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 00:43:34.847323   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.847329   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.847340   60306 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:34.847377   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.857490   60306 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 00:43:34.857540   60306 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:34.857555   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.857577   60306 ssh_runner.go:195] Run: which crictl
	I1205 00:43:34.857583   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 00:43:34.857670   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.859294   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.859304   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.859374   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:34.993076   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:34.995992   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:34.996066   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:34.996083   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 00:43:34.996013   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 00:43:34.996134   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:34.996201   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:35.094591   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:35.158221   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 00:43:35.158266   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 00:43:35.158321   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 00:43:35.158363   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 00:43:35.158370   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 00:43:35.158428   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 00:43:35.192099   60306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 00:43:35.308527   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 00:43:35.308569   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 00:43:35.308634   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 00:43:35.308682   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 00:43:35.308711   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 00:43:35.308736   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 00:43:35.318683   60306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 00:43:35.715985   60306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:43:35.854494   60306 cache_images.go:92] duration metric: took 1.396723575s to LoadCachedImages
	W1205 00:43:35.854581   60306 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20045-14126/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1205 00:43:35.854600   60306 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.20.0 crio true true} ...
	I1205 00:43:35.854739   60306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-037038 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 00:43:35.854832   60306 ssh_runner.go:195] Run: crio config
	I1205 00:43:35.900472   60306 cni.go:84] Creating CNI manager for ""
	I1205 00:43:35.900494   60306 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:43:35.900503   60306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:43:35.900519   60306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-037038 NodeName:kubernetes-upgrade-037038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 00:43:35.900651   60306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-037038"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 00:43:35.900708   60306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 00:43:35.913554   60306 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:43:35.913625   60306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:43:35.923863   60306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1205 00:43:35.940362   60306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:43:35.956536   60306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1205 00:43:35.974418   60306 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I1205 00:43:35.978080   60306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 00:43:35.990858   60306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:43:36.110815   60306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:43:36.127910   60306 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038 for IP: 192.168.39.136
	I1205 00:43:36.127935   60306 certs.go:194] generating shared ca certs ...
	I1205 00:43:36.127957   60306 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.128157   60306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:43:36.128212   60306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:43:36.128226   60306 certs.go:256] generating profile certs ...
	I1205 00:43:36.128309   60306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.key
	I1205 00:43:36.128336   60306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.crt with IP's: []
	I1205 00:43:36.330987   60306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.crt ...
	I1205 00:43:36.331023   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.crt: {Name:mkfc4f7b41535adabafd409589b59f2161ee72c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.331211   60306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.key ...
	I1205 00:43:36.331229   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.key: {Name:mk3b8b47b3ae7b8caa8dbdcd391a11028933b81d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.331332   60306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key.b5fcc2fe
	I1205 00:43:36.331356   60306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt.b5fcc2fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.136]
	I1205 00:43:36.479674   60306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt.b5fcc2fe ...
	I1205 00:43:36.479703   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt.b5fcc2fe: {Name:mk3f936820927a9a419a8e7ea94d7192f6b89f88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.479861   60306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key.b5fcc2fe ...
	I1205 00:43:36.479874   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key.b5fcc2fe: {Name:mk13afd80dbf7703a7802315a4415a8c4ba52926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.479942   60306 certs.go:381] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt.b5fcc2fe -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt
	I1205 00:43:36.480012   60306 certs.go:385] copying /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key.b5fcc2fe -> /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key
	I1205 00:43:36.480063   60306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key
	I1205 00:43:36.480079   60306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.crt with IP's: []
	I1205 00:43:36.645934   60306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.crt ...
	I1205 00:43:36.645975   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.crt: {Name:mk20ed589388918c410a2e54acf5b07d1edfb8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.646138   60306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key ...
	I1205 00:43:36.646151   60306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key: {Name:mkae653093d6193d653b86db1ad63a81ccc0ca6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:43:36.646349   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:43:36.646387   60306 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:43:36.646394   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:43:36.646417   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:43:36.646439   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:43:36.646460   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:43:36.646498   60306 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:43:36.647079   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:43:36.672307   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:43:36.698055   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:43:36.723314   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:43:36.745821   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 00:43:36.768737   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 00:43:36.791385   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:43:36.813877   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 00:43:36.839085   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:43:36.862747   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:43:36.885955   60306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:43:36.908752   60306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:43:36.925286   60306 ssh_runner.go:195] Run: openssl version
	I1205 00:43:36.931194   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:43:36.947522   60306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:43:36.952643   60306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:43:36.952713   60306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:43:36.958802   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:43:36.971746   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:43:36.982805   60306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:43:36.989301   60306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:43:36.989371   60306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:43:36.995304   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 00:43:37.005848   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:43:37.016431   60306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:43:37.022250   60306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:43:37.022322   60306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:43:37.028361   60306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:43:37.042826   60306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:43:37.047576   60306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 00:43:37.047639   60306 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:43:37.047725   60306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:43:37.047788   60306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:43:37.090285   60306 cri.go:89] found id: ""
	I1205 00:43:37.090364   60306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 00:43:37.101809   60306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 00:43:37.112722   60306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 00:43:37.121957   60306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 00:43:37.121982   60306 kubeadm.go:157] found existing configuration files:
	
	I1205 00:43:37.122032   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 00:43:37.130519   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 00:43:37.130578   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 00:43:37.139574   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 00:43:37.148217   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 00:43:37.148289   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 00:43:37.157466   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 00:43:37.166639   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 00:43:37.166706   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 00:43:37.175908   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 00:43:37.184760   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 00:43:37.184815   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 00:43:37.193717   60306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 00:43:37.436637   60306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 00:45:35.408021   60306 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 00:45:35.408158   60306 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 00:45:35.409900   60306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 00:45:35.409983   60306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 00:45:35.410101   60306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 00:45:35.410342   60306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 00:45:35.410504   60306 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 00:45:35.410598   60306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 00:45:35.412486   60306 out.go:235]   - Generating certificates and keys ...
	I1205 00:45:35.412589   60306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 00:45:35.412682   60306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 00:45:35.412768   60306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 00:45:35.412865   60306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 00:45:35.412964   60306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 00:45:35.413051   60306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 00:45:35.413124   60306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 00:45:35.413320   60306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I1205 00:45:35.413421   60306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 00:45:35.413600   60306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I1205 00:45:35.413726   60306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 00:45:35.413828   60306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 00:45:35.413887   60306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 00:45:35.413990   60306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 00:45:35.414064   60306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 00:45:35.414143   60306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 00:45:35.414237   60306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 00:45:35.414343   60306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 00:45:35.414504   60306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 00:45:35.414602   60306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 00:45:35.414659   60306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 00:45:35.414751   60306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 00:45:35.416354   60306 out.go:235]   - Booting up control plane ...
	I1205 00:45:35.416477   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 00:45:35.416549   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 00:45:35.416623   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 00:45:35.416731   60306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 00:45:35.416943   60306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 00:45:35.417022   60306 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 00:45:35.417128   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:45:35.417365   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:45:35.417470   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:45:35.417723   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:45:35.417781   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:45:35.418027   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:45:35.418127   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:45:35.418386   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:45:35.418446   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:45:35.418614   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:45:35.418625   60306 kubeadm.go:310] 
	I1205 00:45:35.418659   60306 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 00:45:35.418694   60306 kubeadm.go:310] 		timed out waiting for the condition
	I1205 00:45:35.418705   60306 kubeadm.go:310] 
	I1205 00:45:35.418733   60306 kubeadm.go:310] 	This error is likely caused by:
	I1205 00:45:35.418763   60306 kubeadm.go:310] 		- The kubelet is not running
	I1205 00:45:35.418888   60306 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 00:45:35.418902   60306 kubeadm.go:310] 
	I1205 00:45:35.418989   60306 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 00:45:35.419024   60306 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 00:45:35.419068   60306 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 00:45:35.419076   60306 kubeadm.go:310] 
	I1205 00:45:35.419186   60306 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 00:45:35.419286   60306 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 00:45:35.419298   60306 kubeadm.go:310] 
	I1205 00:45:35.419385   60306 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 00:45:35.419475   60306 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 00:45:35.419558   60306 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 00:45:35.419644   60306 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 00:45:35.419701   60306 kubeadm.go:310] 
	W1205 00:45:35.419842   60306 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-037038 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 00:45:35.419888   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 00:45:35.937924   60306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 00:45:35.952404   60306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 00:45:35.962371   60306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 00:45:35.962391   60306 kubeadm.go:157] found existing configuration files:
	
	I1205 00:45:35.962433   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 00:45:35.971298   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 00:45:35.971364   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 00:45:35.980581   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 00:45:35.989356   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 00:45:35.989429   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 00:45:35.998691   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 00:45:36.008207   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 00:45:36.008280   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 00:45:36.018121   60306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 00:45:36.027358   60306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 00:45:36.027417   60306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 00:45:36.036792   60306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 00:45:36.108562   60306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 00:45:36.108690   60306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 00:45:36.268594   60306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 00:45:36.268871   60306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 00:45:36.269092   60306 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 00:45:36.492811   60306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 00:45:36.494982   60306 out.go:235]   - Generating certificates and keys ...
	I1205 00:45:36.495079   60306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 00:45:36.495173   60306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 00:45:36.495312   60306 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 00:45:36.495420   60306 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 00:45:36.495530   60306 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 00:45:36.495605   60306 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 00:45:36.495886   60306 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 00:45:36.496288   60306 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 00:45:36.496848   60306 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 00:45:36.497399   60306 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 00:45:36.497465   60306 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 00:45:36.497515   60306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 00:45:36.661531   60306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 00:45:36.784053   60306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 00:45:36.985365   60306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 00:45:37.226795   60306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 00:45:37.241424   60306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 00:45:37.243396   60306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 00:45:37.243500   60306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 00:45:37.376771   60306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 00:45:37.378946   60306 out.go:235]   - Booting up control plane ...
	I1205 00:45:37.379087   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 00:45:37.392153   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 00:45:37.393975   60306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 00:45:37.395001   60306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 00:45:37.397597   60306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 00:46:17.399517   60306 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 00:46:17.400035   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:46:17.400325   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:46:22.400755   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:46:22.401058   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:46:32.401441   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:46:32.401611   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:46:52.401169   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:46:52.401396   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:47:32.401306   60306 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 00:47:32.401610   60306 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 00:47:32.401631   60306 kubeadm.go:310] 
	I1205 00:47:32.401699   60306 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 00:47:32.401765   60306 kubeadm.go:310] 		timed out waiting for the condition
	I1205 00:47:32.401778   60306 kubeadm.go:310] 
	I1205 00:47:32.401824   60306 kubeadm.go:310] 	This error is likely caused by:
	I1205 00:47:32.401870   60306 kubeadm.go:310] 		- The kubelet is not running
	I1205 00:47:32.402027   60306 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 00:47:32.402040   60306 kubeadm.go:310] 
	I1205 00:47:32.402178   60306 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 00:47:32.402235   60306 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 00:47:32.402286   60306 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 00:47:32.402314   60306 kubeadm.go:310] 
	I1205 00:47:32.402467   60306 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 00:47:32.402597   60306 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 00:47:32.402612   60306 kubeadm.go:310] 
	I1205 00:47:32.402770   60306 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 00:47:32.402897   60306 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 00:47:32.403010   60306 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 00:47:32.403111   60306 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 00:47:32.403122   60306 kubeadm.go:310] 
	I1205 00:47:32.403711   60306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 00:47:32.403857   60306 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 00:47:32.403950   60306 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 00:47:32.404030   60306 kubeadm.go:394] duration metric: took 3m55.356394895s to StartCluster
	I1205 00:47:32.404083   60306 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:47:32.404146   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:47:32.465334   60306 cri.go:89] found id: ""
	I1205 00:47:32.465364   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.465375   60306 logs.go:284] No container was found matching "kube-apiserver"
	I1205 00:47:32.465384   60306 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 00:47:32.465448   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:47:32.505517   60306 cri.go:89] found id: ""
	I1205 00:47:32.505542   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.505551   60306 logs.go:284] No container was found matching "etcd"
	I1205 00:47:32.505559   60306 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 00:47:32.505624   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:47:32.546640   60306 cri.go:89] found id: ""
	I1205 00:47:32.546665   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.546675   60306 logs.go:284] No container was found matching "coredns"
	I1205 00:47:32.546683   60306 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:47:32.546753   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:47:32.585130   60306 cri.go:89] found id: ""
	I1205 00:47:32.585159   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.585168   60306 logs.go:284] No container was found matching "kube-scheduler"
	I1205 00:47:32.585176   60306 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:47:32.585231   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:47:32.618443   60306 cri.go:89] found id: ""
	I1205 00:47:32.618482   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.618498   60306 logs.go:284] No container was found matching "kube-proxy"
	I1205 00:47:32.618511   60306 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:47:32.618592   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:47:32.651608   60306 cri.go:89] found id: ""
	I1205 00:47:32.651647   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.651659   60306 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 00:47:32.651668   60306 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 00:47:32.651743   60306 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:47:32.701916   60306 cri.go:89] found id: ""
	I1205 00:47:32.701968   60306 logs.go:282] 0 containers: []
	W1205 00:47:32.701981   60306 logs.go:284] No container was found matching "kindnet"
	I1205 00:47:32.701994   60306 logs.go:123] Gathering logs for kubelet ...
	I1205 00:47:32.702009   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 00:47:32.752928   60306 logs.go:123] Gathering logs for dmesg ...
	I1205 00:47:32.752962   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:47:32.768828   60306 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:47:32.768872   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 00:47:32.955576   60306 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 00:47:32.955604   60306 logs.go:123] Gathering logs for CRI-O ...
	I1205 00:47:32.955620   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 00:47:33.106480   60306 logs.go:123] Gathering logs for container status ...
	I1205 00:47:33.106514   60306 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 00:47:33.163480   60306 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 00:47:33.163547   60306 out.go:270] * 
	* 
	W1205 00:47:33.163615   60306 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 00:47:33.163633   60306 out.go:270] * 
	* 
	W1205 00:47:33.164822   60306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 00:47:33.169106   60306 out.go:201] 
	W1205 00:47:33.170540   60306 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 00:47:33.170598   60306 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 00:47:33.170634   60306 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 00:47:33.172236   60306 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
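The suggestion in the captured output above points at the kubelet cgroup driver. As a minimal reproduction sketch (not part of the captured run; the profile name, memory, driver and runtime flags are copied from the failing invocation, and the --extra-config value is the one minikube itself suggests), a retry plus the kubelet checks kubeadm recommends could look like:

	# recreate the profile and retry the oldest-k8s start with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-037038
	out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still never answers on :10248, inspect it inside the VM
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-037038 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-037038 "sudo journalctl -xeu kubelet | tail -n 100"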
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-037038
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-037038: (1.565759446s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-037038 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-037038 status --format={{.Host}}: exit status 7 (80.135493ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.947303352s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-037038 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (112.19399ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-037038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-037038
	    minikube start -p kubernetes-upgrade-037038 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0370382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-037038 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1205 00:49:50.091424   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-037038 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m16.683951402s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-05 00:51:23.702596051 +0000 UTC m=+6013.466396124
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-037038 -n kubernetes-upgrade-037038
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-037038 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-037038 logs -n 25: (2.184410424s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-622174 sudo find            | cilium-622174             | jenkins | v1.34.0 | 05 Dec 24 00:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-622174 sudo crio            | cilium-622174             | jenkins | v1.34.0 | 05 Dec 24 00:46 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-622174                      | cilium-622174             | jenkins | v1.34.0 | 05 Dec 24 00:46 UTC | 05 Dec 24 00:46 UTC |
	| start   | -p cert-expiration-787684             | cert-expiration-787684    | jenkins | v1.34.0 | 05 Dec 24 00:46 UTC | 05 Dec 24 00:48 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-383287           | force-systemd-env-383287  | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:47 UTC |
	| start   | -p cert-options-288883                | cert-options-288883       | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:48 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-037038          | kubernetes-upgrade-037038 | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:47 UTC |
	| start   | -p kubernetes-upgrade-037038          | kubernetes-upgrade-037038 | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-325690 ssh cat     | force-systemd-flag-325690 | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:47 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-325690          | force-systemd-flag-325690 | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:47 UTC |
	| start   | -p pause-480268 --memory=2048         | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:47 UTC | 05 Dec 24 00:50 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-288883 ssh               | cert-options-288883       | jenkins | v1.34.0 | 05 Dec 24 00:48 UTC | 05 Dec 24 00:48 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-288883 -- sudo        | cert-options-288883       | jenkins | v1.34.0 | 05 Dec 24 00:48 UTC | 05 Dec 24 00:48 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-288883                | cert-options-288883       | jenkins | v1.34.0 | 05 Dec 24 00:48 UTC | 05 Dec 24 00:48 UTC |
	| start   | -p old-k8s-version-403347             | old-k8s-version-403347    | jenkins | v1.34.0 | 05 Dec 24 00:48 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-037038          | kubernetes-upgrade-037038 | jenkins | v1.34.0 | 05 Dec 24 00:49 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-037038          | kubernetes-upgrade-037038 | jenkins | v1.34.0 | 05 Dec 24 00:49 UTC | 05 Dec 24 00:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-480268                       | pause-480268              | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC | 05 Dec 24 00:50 UTC |
	| start   | -p no-preload-001004                  | no-preload-001004         | jenkins | v1.34.0 | 05 Dec 24 00:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-787684             | cert-expiration-787684    | jenkins | v1.34.0 | 05 Dec 24 00:51 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:51:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:51:14.077188   69006 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:51:14.077309   69006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:51:14.077312   69006 out.go:358] Setting ErrFile to fd 2...
	I1205 00:51:14.077316   69006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:51:14.077506   69006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:51:14.078092   69006 out.go:352] Setting JSON to false
	I1205 00:51:14.079005   69006 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9220,"bootTime":1733350654,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:51:14.079096   69006 start.go:139] virtualization: kvm guest
	I1205 00:51:14.081142   69006 out.go:177] * [cert-expiration-787684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:51:14.082362   69006 notify.go:220] Checking for updates...
	I1205 00:51:14.082405   69006 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:51:14.083739   69006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:51:14.085113   69006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1205 00:51:14.086347   69006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1205 00:51:14.087611   69006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:51:14.088734   69006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:51:14.090385   69006 config.go:182] Loaded profile config "cert-expiration-787684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:51:14.090946   69006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:51:14.090989   69006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:51:14.108488   69006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1205 00:51:14.109065   69006 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:51:14.109688   69006 main.go:141] libmachine: Using API Version  1
	I1205 00:51:14.109707   69006 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:51:14.110212   69006 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:51:14.110406   69006 main.go:141] libmachine: (cert-expiration-787684) Calling .DriverName
	I1205 00:51:14.110635   69006 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:51:14.111038   69006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:51:14.111070   69006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:51:14.127136   69006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I1205 00:51:14.127563   69006 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:51:14.128041   69006 main.go:141] libmachine: Using API Version  1
	I1205 00:51:14.128058   69006 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:51:14.128344   69006 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:51:14.128517   69006 main.go:141] libmachine: (cert-expiration-787684) Calling .DriverName
	I1205 00:51:14.168136   69006 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 00:51:14.169675   69006 start.go:297] selected driver: kvm2
	I1205 00:51:14.169682   69006 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-787684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:cert-expiration-787684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:51:14.169776   69006 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:51:14.170564   69006 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:51:14.170651   69006 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 00:51:14.190741   69006 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 00:51:14.191244   69006 cni.go:84] Creating CNI manager for ""
	I1205 00:51:14.191310   69006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:51:14.191369   69006 start.go:340] cluster config:
	{Name:cert-expiration-787684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:cert-expiration-787684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:51:14.191506   69006 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:51:14.193570   69006 out.go:177] * Starting "cert-expiration-787684" primary control-plane node in "cert-expiration-787684" cluster
	I1205 00:51:14.194855   69006 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:51:14.194891   69006 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 00:51:14.194898   69006 cache.go:56] Caching tarball of preloaded images
	I1205 00:51:14.194981   69006 preload.go:172] Found /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 00:51:14.194989   69006 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
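The two cache lines above skip the tarball download because the preload already exists on disk. A minimal sketch of that existence check, in Go and with a hypothetical path (this is not minikube's own code), could look like:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Hypothetical location; the log above uses the Jenkins .minikube cache directory.
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("Found local preload, skipping download:", tarball)
	} else if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("Preload missing, would download:", tarball)
	} else {
		fmt.Println("Could not stat preload:", err)
	}
}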
	I1205 00:51:14.195109   69006 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/cert-expiration-787684/config.json ...
	I1205 00:51:14.195353   69006 start.go:360] acquireMachinesLock for cert-expiration-787684: {Name:mk6f272cafde86bba070a3b6c7e365bce62bc393 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 00:51:14.195397   69006 start.go:364] duration metric: took 29.097µs to acquireMachinesLock for "cert-expiration-787684"
	I1205 00:51:14.195408   69006 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:51:14.195412   69006 fix.go:54] fixHost starting: 
	I1205 00:51:14.195733   69006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:51:14.195761   69006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:51:14.210825   69006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I1205 00:51:14.211322   69006 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:51:14.211760   69006 main.go:141] libmachine: Using API Version  1
	I1205 00:51:14.211773   69006 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:51:14.212133   69006 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:51:14.212373   69006 main.go:141] libmachine: (cert-expiration-787684) Calling .DriverName
	I1205 00:51:14.212577   69006 main.go:141] libmachine: (cert-expiration-787684) Calling .GetState
	I1205 00:51:14.214324   69006 fix.go:112] recreateIfNeeded on cert-expiration-787684: state=Running err=<nil>
	W1205 00:51:14.214338   69006 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:51:14.216122   69006 out.go:177] * Updating the running kvm2 "cert-expiration-787684" VM ...
	I1205 00:51:12.813433   67883 main.go:141] libmachine: (kubernetes-upgrade-037038) Calling .GetIP
	I1205 00:51:12.816398   67883 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:51:12.817057   67883 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:02:5e", ip: ""} in network mk-kubernetes-upgrade-037038: {Iface:virbr1 ExpiryTime:2024-12-05 01:48:29 +0000 UTC Type:0 Mac:52:54:00:48:02:5e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:kubernetes-upgrade-037038 Clientid:01:52:54:00:48:02:5e}
	I1205 00:51:12.817092   67883 main.go:141] libmachine: (kubernetes-upgrade-037038) DBG | domain kubernetes-upgrade-037038 has defined IP address 192.168.39.136 and MAC address 52:54:00:48:02:5e in network mk-kubernetes-upgrade-037038
	I1205 00:51:12.817315   67883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 00:51:12.822043   67883 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:51:12.822153   67883 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 00:51:12.822194   67883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:51:12.866180   67883 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:51:12.866206   67883 crio.go:433] Images already preloaded, skipping extraction
	I1205 00:51:12.866255   67883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:51:12.901208   67883 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 00:51:12.901230   67883 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:51:12.901237   67883 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.31.2 crio true true} ...
	I1205 00:51:12.901346   67883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-037038 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
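The kubelet drop-in printed above is generated from the node's config. As an illustration only (the template and field names below are hypothetical, not minikube's), the same unit text can be produced with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the unit printed in the log; placeholders are filled from node config.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values copied from the log above; the struct itself is illustrative.
	_ = t.Execute(os.Stdout, struct{ Runtime, Kubelet, NodeName, NodeIP string }{
		"crio", "/var/lib/minikube/binaries/v1.31.2/kubelet", "kubernetes-upgrade-037038", "192.168.39.136",
	})
}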
	I1205 00:51:12.901426   67883 ssh_runner.go:195] Run: crio config
	I1205 00:51:12.951292   67883 cni.go:84] Creating CNI manager for ""
	I1205 00:51:12.951319   67883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 00:51:12.951338   67883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:51:12.951367   67883 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-037038 NodeName:kubernetes-upgrade-037038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:51:12.951518   67883 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-037038"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.136"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
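The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks those documents with gopkg.in/yaml.v3, assuming the text has been saved locally as kubeadm.yaml (not part of the test harness), would be:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config printed above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		// Print the kind of each document and the version field when present.
		fmt.Println("kind:", doc["kind"])
		if v, ok := doc["kubernetesVersion"]; ok {
			fmt.Println("  kubernetesVersion:", v)
		}
	}
}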
	
	I1205 00:51:12.951587   67883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:51:12.963440   67883 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:51:12.963519   67883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:51:12.975355   67883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1205 00:51:12.994619   67883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:51:13.012303   67883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 00:51:13.032984   67883 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I1205 00:51:13.037520   67883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:51:13.201620   67883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:51:13.217999   67883 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038 for IP: 192.168.39.136
	I1205 00:51:13.218082   67883 certs.go:194] generating shared ca certs ...
	I1205 00:51:13.218116   67883 certs.go:226] acquiring lock for ca certs: {Name:mk2982cdfe600633007f1f64161dce33fcee9952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:51:13.218315   67883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key
	I1205 00:51:13.218372   67883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key
	I1205 00:51:13.218385   67883 certs.go:256] generating profile certs ...
	I1205 00:51:13.218481   67883 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/client.key
	I1205 00:51:13.218548   67883 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key.b5fcc2fe
	I1205 00:51:13.218601   67883 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key
	I1205 00:51:13.218743   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem (1338 bytes)
	W1205 00:51:13.218793   67883 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385_empty.pem, impossibly tiny 0 bytes
	I1205 00:51:13.218807   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 00:51:13.218843   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:51:13.218883   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:51:13.218918   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/certs/key.pem (1675 bytes)
	I1205 00:51:13.218979   67883 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem (1708 bytes)
	I1205 00:51:13.219682   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:51:13.250754   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 00:51:13.276371   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:51:13.300066   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 00:51:13.328427   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 00:51:13.355073   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 00:51:13.383683   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:51:13.410683   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/kubernetes-upgrade-037038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 00:51:13.438505   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/certs/21385.pem --> /usr/share/ca-certificates/21385.pem (1338 bytes)
	I1205 00:51:13.473733   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/ssl/certs/213852.pem --> /usr/share/ca-certificates/213852.pem (1708 bytes)
	I1205 00:51:13.506651   67883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-14126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:51:13.535486   67883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:51:13.553896   67883 ssh_runner.go:195] Run: openssl version
	I1205 00:51:13.559783   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:51:13.571341   67883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:51:13.575723   67883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:51:13.575781   67883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:51:13.581754   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 00:51:13.591766   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21385.pem && ln -fs /usr/share/ca-certificates/21385.pem /etc/ssl/certs/21385.pem"
	I1205 00:51:13.603021   67883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21385.pem
	I1205 00:51:13.607355   67883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:52 /usr/share/ca-certificates/21385.pem
	I1205 00:51:13.607411   67883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21385.pem
	I1205 00:51:13.613070   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21385.pem /etc/ssl/certs/51391683.0"
	I1205 00:51:13.624047   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/213852.pem && ln -fs /usr/share/ca-certificates/213852.pem /etc/ssl/certs/213852.pem"
	I1205 00:51:13.634973   67883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/213852.pem
	I1205 00:51:13.639629   67883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:52 /usr/share/ca-certificates/213852.pem
	I1205 00:51:13.639700   67883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/213852.pem
	I1205 00:51:13.646032   67883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/213852.pem /etc/ssl/certs/3ec20f2e.0"
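Each "test -L ... || ln -fs ..." line above installs a CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name only if the link is not already there. A rough Go equivalent of that idempotent step, with paths and the hash name copied from the log (computing the hash itself is left to openssl):

package main

import (
	"fmt"
	"os"
)

// ensureLink mimics "test -L link || ln -fs target link": create the symlink
// only when nothing by that name is already present.
func ensureLink(target, link string) error {
	if _, err := os.Lstat(link); err == nil {
		return nil // link (or a file by that name) already exists
	}
	_ = os.Remove(link) // best effort, mirrors the -f in "ln -fs"
	return os.Symlink(target, link)
}

func main() {
	// Names taken from the log above; running this for real requires root.
	if err := ensureLink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
		fmt.Println("link failed:", err)
	}
}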
	I1205 00:51:13.656828   67883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:51:13.662566   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:51:13.668559   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:51:13.674333   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:51:13.680222   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:51:13.685729   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:51:13.691185   67883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
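The six "openssl x509 ... -checkend 86400" probes above ask whether each control-plane certificate is still valid for at least 24 hours. The same check in Go, against a single hypothetical PEM file rather than the files under /var/lib/minikube/certs, looks roughly like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local path; the log checks certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter is inside the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}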
	I1205 00:51:13.697379   67883 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-037038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-037038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:51:13.697465   67883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 00:51:13.697522   67883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:51:13.749257   67883 cri.go:89] found id: "a5d01a1f6bf0f95284addbf68c43586de8301cb773c88e7f94aea3881ed5025b"
	I1205 00:51:13.749278   67883 cri.go:89] found id: "281d8f234c322bc634d9ebcc741cb50d72c6569c785c2bbcd5c393ec791bb9a2"
	I1205 00:51:13.749281   67883 cri.go:89] found id: "2b39412e9cb9420a2cbd884fb3c366177ea2fc80bc1337b0eb39e632efac7009"
	I1205 00:51:13.749285   67883 cri.go:89] found id: "8262fd8eabc00a581cfa71efe12d7ddc7a29a0ddbbe0ac434144d7d4a5d735a2"
	I1205 00:51:13.749287   67883 cri.go:89] found id: "501081b1dcd7a16e912893d93dc37076a2c0e3327848be9af67352fd118d0fb4"
	I1205 00:51:13.749291   67883 cri.go:89] found id: "bd8f54f88db3ba9f841997285752ff9dc43e955a04086dd480bb9c2d9ed4e3a6"
	I1205 00:51:13.749293   67883 cri.go:89] found id: "90331b43772579b21433644c19e8c9cb53cbb17cca8a3390fd8fe9d1446267da"
	I1205 00:51:13.749295   67883 cri.go:89] found id: "5c3ea3086daf600f77d80016e05a369e0bd658eda8a5e85464933a928da72c7e"
	I1205 00:51:13.749298   67883 cri.go:89] found id: "7c7d2161716e1a9580927aab40c0683dc6872fe8a5aecbc78e8d7b759fc339f5"
	I1205 00:51:13.749303   67883 cri.go:89] found id: "190de4913771e2392b1e58b1b7560285571335a5d1fba58102235bc7aac1c675"
	I1205 00:51:13.749306   67883 cri.go:89] found id: "eb3b4b1db826dd8e7a851c9006744ae956afcb30af90956158ac8e43804766e8"
	I1205 00:51:13.749308   67883 cri.go:89] found id: "4af7a4afeb94611211dafb5add82c899e6a76e102302900514c1c7565779bb71"
	I1205 00:51:13.749312   67883 cri.go:89] found id: "79756551573eedf38a406292f062e2e0be1a752df18f1c2e866f4e56001f7c09"
	I1205 00:51:13.749318   67883 cri.go:89] found id: "cf1c4b8cdb24974823cbaa8e5d22d0880ce9c938fcb655304fdeb7973be10411"
	I1205 00:51:13.749333   67883 cri.go:89] found id: "964dcf170e7c601f023d36c3a9c16d17ec73b296b6ebc638dfffae6a46eb9478"
	I1205 00:51:13.749342   67883 cri.go:89] found id: ""
	I1205 00:51:13.749387   67883 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
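The "found id:" entries in the post-mortem come from listing kube-system containers through the CRI. A standalone sketch of that listing, shelling out to crictl the same way the log's ssh_runner step does (an illustration, not the test helper itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same crictl invocation as the log above: all containers labelled kube-system, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}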
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-037038 -n kubernetes-upgrade-037038
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-037038 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-037038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-037038
--- FAIL: TestKubernetesUpgrade (502.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.054s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
E1205 01:10:07.014107   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.6:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.6:8443: connect: connection refused
(the same warning repeats 62 more times: the apiserver at 192.168.72.6:8443 continued to refuse connections while the dashboard pod list was polled)
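The warnings above come from the test helper repeatedly listing the dashboard pods through the apiserver at 192.168.72.6:8443. A minimal client-go sketch of an equivalent query (the kubeconfig path and the printing are assumptions, standing in for the suite's own helper):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the integration tests use the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same query the warnings show: pods in kubernetes-dashboard selected by k8s-app=kubernetes-dashboard.
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver down, this is where "connection refused" surfaces.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Println("found", len(pods.Items), "dashboard pods")
}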
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (24m32s)
		TestStartStop (24m55s)
		TestStartStop/group/default-k8s-diff-port (19m42s)
		TestStartStop/group/default-k8s-diff-port/serial (19m42s)
		TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5m51s)
		TestStartStop/group/embed-certs (17m37s)
		TestStartStop/group/embed-certs/serial (17m37s)
		TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5m47s)
		TestStartStop/group/no-preload (20m22s)
		TestStartStop/group/no-preload/serial (20m22s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5m12s)
		TestStartStop/group/old-k8s-version (22m32s)
		TestStartStop/group/old-k8s-version/serial (22m32s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (2m31s)

                                                
                                                
goroutine 4053 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
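The running goroutine is the alarm the Go test harness arms for the package-level -timeout (2h0m0s here); when it fires, the whole test binary panics and dumps every goroutine. A minimal sketch, not taken from the minikube suite, of a test that checks its remaining budget before starting long work:

package integration

import (
	"testing"
	"time"
)

// Sketch only: skip cleanly when the -timeout deadline is close instead of
// letting the harness panic the whole binary, as happened above.
func TestLongOperationWithBudget(t *testing.T) {
	if deadline, ok := t.Deadline(); ok && time.Until(deadline) < 10*time.Minute {
		t.Skip("less than 10 minutes left before the -timeout deadline")
	}
	// ... the long-running start/stop work would go here ...
}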

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00072a4e0, 0xc00086bbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0000124c8, {0x52ca940, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x4113b0?, 0x52f1100?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000793900)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000793900)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001e7580)
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2218 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0018864e0, {0x2c72897?, 0xc000508d70?}, 0xc001e06000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0018864e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0018864e0, 0xc00078c880)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1867
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1969 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015b1a00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015b1a00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015b1a00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015b1a00, 0xc000972980)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2696 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3970788, 0xc000491e30}, {0x3964040, 0xc00071ef80}, 0x1, 0x0, 0xc000091c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3970788?, 0xc0005e30a0?}, 0x3b9aca00, 0xc000091e10?, 0x1, 0xc000091c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3970788, 0xc0005e30a0}, 0xc001886000, {0xc0001129e0, 0x1c}, {0x2c6d2e5, 0x14}, {0x2c833d8, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3970788, 0xc0005e30a0}, 0xc001886000, {0xc0001129e0, 0x1c}, {0x2c6fe98?, 0xc00152cf60?}, {0x559133?, 0x4b14af?}, {0xc00012c600, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001886000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001886000, 0xc000122000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2421
	/usr/local/go/src/testing/testing.go:1743 +0x390
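Each hung UserAppExistsAfterStop subtest is parked in PodWait, which wraps apimachinery's context-aware polling; the final 0x7dba821800 argument is the timeout in nanoseconds (540s, i.e. 9 minutes) and 0x3b9aca00 is the 1-second poll interval. A minimal sketch of that wait pattern, with a hypothetical condition function standing in for the real apiserver check:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll once per second for up to 9 minutes, the shape of the wait seen in the
	// PodWait frames above; the condition body is only an illustration.
	err := wait.PollUntilContextTimeout(context.Background(), 1*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podsReady(), nil // (false, nil) keeps polling until the timeout expires
		})
	if err != nil {
		fmt.Println("gave up waiting:", err)
	}
}

// podsReady is a hypothetical stand-in for the real pod-list query.
func podsReady() bool { return false }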

                                                
                                                
goroutine 1867 [chan receive, 23 minutes]:
testing.(*T).Run(0xc0015b0820, {0x2c4a7c9?, 0x0?}, 0xc00078c880)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015b0820)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015b0820, 0xc000bae640)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1866
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2727 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3970788, 0xc00047e850}, {0x3964040, 0xc001665ce0}, 0x1, 0x0, 0xc001ff5c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3970788?, 0xc0007845b0?}, 0x3b9aca00, 0xc001407e10?, 0x1, 0xc001407c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3970788, 0xc0007845b0}, 0xc0020424e0, {0xc001ae02e8, 0x12}, {0x2c6d2e5, 0x14}, {0x2c833d8, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3970788, 0xc0007845b0}, 0xc0020424e0, {0xc001ae02e8, 0x12}, {0x2c55a09?, 0xc00152e760?}, {0x559133?, 0x4b14af?}, {0xc000bc4200, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0020424e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0020424e0, 0xc00078cd00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2570
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1872 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0015b1040, {0x2c4a7c9?, 0x0?}, 0xc0001e6800)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015b1040)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015b1040, 0xc000bae800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1866
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3304 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d0a6c0, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3302
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 408 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000baee90, 0x23)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bead80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x398c360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000baeec0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001523740, {0x3937ba0, 0xc0015217a0}, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001523740, 0x3b9aca00, 0x0, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2443 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00072d1c0, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2438
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2795 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3970788, 0xc000784ee0}, {0x3964040, 0xc00165b6c0}, 0x1, 0x0, 0xc002029c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3970788?, 0xc00047a1c0?}, 0x3b9aca00, 0xc000befe10?, 0x1, 0xc000befc18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3970788, 0xc00047a1c0}, 0xc002042340, {0xc001d14090, 0x11}, {0x2c6d2e5, 0x14}, {0x2c833d8, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3970788, 0xc00047a1c0}, 0xc002042340, {0xc001d14090, 0x11}, {0x2c53b2f?, 0xc001eadf60?}, {0x559133?, 0x4b14af?}, {0xc00012c500, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002042340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002042340, 0xc001e06100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2377
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2451 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2450
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1866 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0015b0340, 0x35f9dd0)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1767
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3287 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d0a690, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00152cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x398c360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d0a6c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013ce4c0, {0x3937ba0, 0xc0016d6540}, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013ce4c0, 0x3b9aca00, 0x0, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3304
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1657 [chan receive, 26 minutes]:
testing.(*T).Run(0xc00072a680, {0x2c493cb?, 0x55927c?}, 0xc00156e780)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00072a680)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc00072a680, 0x35f9b90)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 344 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3966d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 362
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2570 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0018869c0, {0x2c72897?, 0xc00050b570?}, 0xc00078cd00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0018869c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0018869c0, 0xc0001e6800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 219 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f2980bf5fe0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0001e6a00?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001e6a00)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0001e6a00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0009705c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009705c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc001ebe0f0, {0x39639e0, 0xc0009705c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc001ebe0f0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x6e694c5f4d564b2f?, 0xc000033d40)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 216
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2002 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015b1ba0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015b1ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015b1ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015b1ba0, 0xc000972a00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 814 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc0016f8240)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 811
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 2724 [IO wait]:
internal/poll.runtime_pollWait(0x7f298840a748, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000122f00?, 0xc000bfc000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000122f00, {0xc000bfc000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000122f00, {0xc000bfc000?, 0x10?, 0xc0016e38a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000974088, {0xc000bfc000?, 0xc000bfc005?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0014ca0c0, {0xc000bfc000?, 0x0?, 0xc0014ca0c0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc001afc2b8, {0x39381e0, 0xc0014ca0c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001afc008, {0x7f29880f9228, 0xc00155a030}, 0xc0016e3a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001afc008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001afc008, {0xc001995000, 0x1000, 0xc001f0b340?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0019fefc0, {0xc00201e3c0, 0x9, 0x527b360?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39362a0, 0xc0019fefc0}, {0xc00201e3c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00201e3c0, 0x9, 0x47b545?}, {0x39362a0?, 0xc0019fefc0?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00201e380)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0016e3fa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2456 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc002042000)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2335 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2723
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:898 +0xddb

                                                
                                                
goroutine 771 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014eaa80, 0xc00149dc70)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 241
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3288 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3970a90, 0xc00074c0e0}, 0xc001f02f50, 0xc001f02f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3970a90, 0xc00074c0e0}, 0x0?, 0xc001f02f50, 0xc001f02f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3970a90?, 0xc00074c0e0?}, 0x9ec6d6?, 0xc0018ed980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018ed980?, 0x593f04?, 0xc001e06480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3304
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1870 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0015b0d00, {0x2c4a7c9?, 0x0?}, 0xc000972480)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015b0d00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015b0d00, 0xc000bae740)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1866
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1913 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015b16c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015b16c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015b16c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015b16c0, 0xc000972280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1869 [chan receive, 19 minutes]:
testing.(*T).Run(0xc0015b0b60, {0x2c4a7c9?, 0x0?}, 0xc00078c580)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015b0b60)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015b0b60, 0xc000bae700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1866
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2421 [chan receive, 7 minutes]:
testing.(*T).Run(0xc00072b040, {0x2c72897?, 0xc001eac570?}, 0xc000122000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00072b040)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00072b040, 0xc00078c580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1869
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3303 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3966d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3302
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 813 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc0016f8240)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 811
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 2377 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00072bba0, {0x2c72897?, 0xc00152cd70?}, 0xc001e06100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00072bba0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00072bba0, 0xc000972480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1870
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 345 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000baeec0, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 362
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2450 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3970a90, 0xc00074c0e0}, 0xc00152cf50, 0xc001f07f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3970a90, 0xc00074c0e0}, 0x0?, 0xc00152cf50, 0xc00152cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3970a90?, 0xc00074c0e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa787e5?, 0xc001a00360?, 0x3966d20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2443
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 409 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3970a90, 0xc00074c0e0}, 0xc001429f50, 0xc00141ef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3970a90, 0xc00074c0e0}, 0x60?, 0xc001429f50, 0xc001429f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3970a90?, 0xc00074c0e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001429fd0?, 0x593f04?, 0xc000087260?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 410 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 409
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 508 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bca780, 0xc001b91110)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 507
	/usr/local/go/src/os/exec/exec.go:759 +0x953
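
A note on the two "chan send, 75 minutes" goroutines in this dump (508 here and 644 further down): both are the internal context watcher that os/exec starts for commands created with exec.CommandContext, and a watcher parked on a channel send typically means the command was Started but its Wait was never reached (for example because the enclosing test run was killed). The following is a minimal, hypothetical Go sketch of the pattern involved, not the suite's actual helper:

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// exec.CommandContext makes Start launch an internal context-watching goroutine.
	cmd := exec.CommandContext(ctx, "sleep", "1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Pairing Start with Wait lets that goroutine hand off its result and exit;
	// abandoning the command after Start is what leaves watchCtx blocked, as above.
	_ = cmd.Wait()
}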

                                                
                                                
goroutine 2003 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015b1d40)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015b1d40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015b1d40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015b1d40, 0xc000972a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390
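
Goroutine 2003 above (and the similar TestNetworkPlugins goroutines later in the dump) is parked in testing.(*testContext).waitParallel: a subtest that called t.Parallel() and is waiting for a slot under the -test.parallel limit. A hypothetical sketch of that construct (not the suite's code; the plugin names are illustrative):

package example

import "testing"

func TestGroup(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks here, in waitParallel, until the parallel limit admits it
			// per-plugin checks would run here
		})
	}
}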

                                                
                                                
goroutine 1767 [chan receive, 26 minutes]:
testing.(*T).Run(0xc00019e4e0, {0x2c493cb?, 0x559133?}, 0x35f9dd0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00019e4e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00019e4e0, 0x35f9bd8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2431 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3970a90, 0xc00074c0e0}, 0xc001eaf750, 0xc001432f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3970a90, 0xc00074c0e0}, 0xd0?, 0xc001eaf750, 0xc001eaf798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3970a90?, 0xc00074c0e0?}, 0xc0015b1380?, 0x559a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001eaf7d0?, 0x593f04?, 0xc00072c780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2394
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 644 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0009cf080, 0xc000175650)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 643
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2393 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3966d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2426
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2442 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3966d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2438
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1912 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0015b1520, 0xc00156e780)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1657
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2369 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00072d190, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000869d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x398c360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00072d1c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013f6a70, {0x3937ba0, 0xc001d84a80}, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013f6a70, 0x3b9aca00, 0x0, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2443
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf
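
Goroutines 2369 and 2430 are client-go certificate-rotation workers parked in wait.Until/BackoffUntil, waiting for their stop channel to close. A minimal sketch of that loop primitive (the worker body and timings here are illustrative, not client-go's):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})
	go func() {
		time.Sleep(3 * time.Second)
		close(stopCh) // closing the stop channel is what ends the loop
	}()

	// wait.Until runs the function immediately and then once per period until
	// stopCh is closed; the dynamicClientCert workers above sit in this loop.
	wait.Until(func() {
		fmt.Println("process next work item") // hypothetical worker body
	}, time.Second, stopCh)
}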

                                                
                                                
goroutine 2432 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2431
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3302 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3970788, 0xc0000247e0}, {0x3964040, 0xc001e92a20}, 0x1, 0x0, 0xc001ff9c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3970788?, 0xc000490850?}, 0x3b9aca00, 0xc001485e10?, 0x1, 0xc001485c18)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3970788, 0xc000490850}, 0xc001886820, {0xc00054ae10, 0x16}, {0x2c6d2e5, 0x14}, {0x2c833d8, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3970788, 0xc000490850}, 0xc001886820, {0xc00054ae10, 0x16}, {0x2c5f16f?, 0xc001426f60?}, {0x559133?, 0x4b14af?}, {0xc0007f6a80, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001886820)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001886820, 0xc001e06000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2218
	/usr/local/go/src/testing/testing.go:1743 +0x390
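
Goroutine 3302 is the one tied to the failing StartStop subtest: the PodWait helper is blocked inside wait.PollUntilContextTimeout with a 1-second interval (0x3b9aca00 ns) and what works out to roughly a 9-minute timeout. A minimal sketch of that polling call, with a hypothetical condition standing in for PodWait's real pod-status check:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx := context.Background()

	// Poll a condition every second, starting immediately, for up to 9 minutes.
	err := wait.PollUntilContextTimeout(ctx, time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// A real condition would check pod status via a Kubernetes clientset;
			// returning (false, nil) keeps the poll going.
			return false, nil
		})
	fmt.Println(err) // a timeout/deadline error once the 9 minutes elapse
}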

                                                
                                                
goroutine 2766 [IO wait]:
internal/poll.runtime_pollWait(0x7f2980bf6610, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001e06f80?, 0xc001c86000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e06f80, {0xc001c86000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001e06f80, {0xc001c86000?, 0x10?, 0xc0000cf8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0008ce418, {0xc001c86000?, 0xc001c8605e?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001c631a0, {0xc001c86000?, 0x0?, 0xc001c631a0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc001afc638, {0x39381e0, 0xc001c631a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001afc388, {0x7f29880f9228, 0xc00155a7c8}, 0xc0000cfa10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001afc388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001afc388, {0xc000138000, 0x1000, 0xc0019b6700?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0019fee40, {0xc000976ac0, 0x9, 0x527b360?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39362a0, 0xc0019fee40}, {0xc000976ac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000976ac0, 0x9, 0x47b545?}, {0x39362a0?, 0xc0019fee40?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000976a80)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0000cffa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2456 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001886680)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2335 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2765
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:898 +0xddb

                                                
                                                
goroutine 2394 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000970880, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2426
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2004 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000032000)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000032000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000032000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000032000, 0xc000972b00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2032 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0018861a0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0018861a0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0018861a0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0018861a0, 0xc000632d80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2033 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc000854d70)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001886340)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001886340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001886340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001886340, 0xc000632e00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1912
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2430 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000970850, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000867d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x398c360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000970880)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001cdcba0, {0x3937ba0, 0xc001d08510}, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001cdcba0, 0x3b9aca00, 0x0, 0x1, 0xc00074c0e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2394
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2712 [IO wait]:
internal/poll.runtime_pollWait(0x7f2980bf5bc0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00078dd00?, 0xc001954000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00078dd00, {0xc001954000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc00078dd00, {0xc001954000?, 0x9d9372?, 0xc0000d29a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001526118, {0xc001954000?, 0xc001e928e0?, 0xc00195405f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001c63140, {0xc001954000?, 0x0?, 0xc001c63140?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00022f438, {0x39381e0, 0xc001c63140})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00022f188, {0x39376c0, 0xc001526118}, 0xc0000d2a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00022f188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00022f188, {0xc0014b1000, 0x1000, 0xc0019b6700?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001c17c20, {0xc001e96120, 0x9, 0x527b360?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39362a0, 0xc001c17c20}, {0xc001e96120, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001e96120, 0x9, 0x47b545?}, {0x39362a0?, 0xc001c17c20?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001e960e0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0000d2fa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2456 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0015b0000)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:2335 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2711
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.31.0/http2/transport.go:898 +0xddb

                                                
                                                
goroutine 3289 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3288
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                    

Test pass (169/213)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 15
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.62
22 TestOffline 82.82
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
28 TestCertOptions 72.3
29 TestCertExpiration 307.98
31 TestForceSystemdFlag 82.79
32 TestForceSystemdEnv 70.04
34 TestKVMDriverInstallOrUpdate 3.86
38 TestErrorSpam/setup 39.22
39 TestErrorSpam/start 0.34
40 TestErrorSpam/status 0.71
41 TestErrorSpam/pause 1.53
42 TestErrorSpam/unpause 1.69
43 TestErrorSpam/stop 5.69
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 54.2
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 38.77
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
55 TestFunctional/serial/CacheCmd/cache/add_local 2.09
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 32.7
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.4
66 TestFunctional/serial/LogsFileCmd 1.42
67 TestFunctional/serial/InvalidService 4.37
69 TestFunctional/parallel/ConfigCmd 0.37
70 TestFunctional/parallel/DashboardCmd 8.88
71 TestFunctional/parallel/DryRun 0.26
72 TestFunctional/parallel/InternationalLanguage 0.15
73 TestFunctional/parallel/StatusCmd 0.89
77 TestFunctional/parallel/ServiceCmdConnect 19.54
78 TestFunctional/parallel/AddonsCmd 0.11
79 TestFunctional/parallel/PersistentVolumeClaim 46.44
81 TestFunctional/parallel/SSHCmd 0.39
82 TestFunctional/parallel/CpCmd 1.24
83 TestFunctional/parallel/MySQL 25.57
84 TestFunctional/parallel/FileSync 0.21
85 TestFunctional/parallel/CertSync 1.28
89 TestFunctional/parallel/NodeLabels 0.1
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
93 TestFunctional/parallel/License 0.62
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
98 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
99 TestFunctional/parallel/ImageCommands/Setup 1.73
100 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
101 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
102 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
103 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
104 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.52
116 TestFunctional/parallel/MountCmd/any-port 17.03
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.24
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.75
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.12
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
122 TestFunctional/parallel/MountCmd/specific-port 2.11
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.41
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
126 TestFunctional/parallel/ProfileCmd/profile_list 0.35
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
128 TestFunctional/parallel/ServiceCmd/List 1.31
129 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
131 TestFunctional/parallel/ServiceCmd/Format 0.29
132 TestFunctional/parallel/ServiceCmd/URL 0.34
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 197.79
140 TestMultiControlPlane/serial/DeployApp 6.63
141 TestMultiControlPlane/serial/PingHostFromPods 1.17
142 TestMultiControlPlane/serial/AddWorkerNode 53.2
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
145 TestMultiControlPlane/serial/CopyFile 12.91
151 TestMultiControlPlane/serial/DeleteSecondaryNode 16.65
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
154 TestMultiControlPlane/serial/RestartCluster 354.07
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
156 TestMultiControlPlane/serial/AddSecondaryNode 77.22
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
161 TestJSONOutput/start/Command 53.36
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.63
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.59
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 6.62
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.2
189 TestMainNoArgs 0.05
190 TestMinikubeProfile 84.44
193 TestMountStart/serial/StartWithMountFirst 31.04
194 TestMountStart/serial/VerifyMountFirst 0.38
195 TestMountStart/serial/StartWithMountSecond 25.49
196 TestMountStart/serial/VerifyMountSecond 0.37
197 TestMountStart/serial/DeleteFirst 0.7
198 TestMountStart/serial/VerifyMountPostDelete 0.37
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 22.59
201 TestMountStart/serial/VerifyMountPostStop 0.36
204 TestMultiNode/serial/FreshStart2Nodes 110.15
205 TestMultiNode/serial/DeployApp2Nodes 7.13
206 TestMultiNode/serial/PingHostFrom2Pods 0.76
207 TestMultiNode/serial/AddNode 51.34
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.55
210 TestMultiNode/serial/CopyFile 7.08
211 TestMultiNode/serial/StopNode 2.23
212 TestMultiNode/serial/StartAfterStop 39.24
214 TestMultiNode/serial/DeleteNode 2.25
216 TestMultiNode/serial/RestartMultiNode 178.22
217 TestMultiNode/serial/ValidateNameConflict 40.67
224 TestScheduledStopUnix 113.22
228 TestRunningBinaryUpgrade 190.48
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
234 TestNoKubernetes/serial/StartWithK8s 94.02
235 TestStoppedBinaryUpgrade/Setup 2.3
236 TestStoppedBinaryUpgrade/Upgrade 116.16
237 TestNoKubernetes/serial/StartWithStopK8s 37.71
238 TestNoKubernetes/serial/Start 28.28
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
240 TestNoKubernetes/serial/ProfileList 27.44
241 TestNoKubernetes/serial/Stop 1.32
242 TestNoKubernetes/serial/StartNoArgs 22.77
250 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
264 TestPause/serial/Start 129.56
267 TestPause/serial/SecondStartNoReconfiguration 39.62
268 TestPause/serial/Pause 0.7
269 TestPause/serial/VerifyStatus 0.24
270 TestPause/serial/Unpause 0.6
271 TestPause/serial/PauseAgain 0.73
272 TestPause/serial/DeletePaused 0.81
273 TestPause/serial/VerifyDeletedResources 0.59
x
+
TestDownloadOnly/v1.20.0/json-events (27.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-457798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-457798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.9808936s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 23:11:38.254758   21385 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1204 23:11:38.254861   21385 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-457798
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-457798: exit status 85 (59.849913ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-457798 | jenkins | v1.34.0 | 04 Dec 24 23:11 UTC |          |
	|         | -p download-only-457798        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:11:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:11:10.314021   21396 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:11:10.314135   21396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:10.314143   21396 out.go:358] Setting ErrFile to fd 2...
	I1204 23:11:10.314147   21396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:10.314336   21396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	W1204 23:11:10.314456   21396 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20045-14126/.minikube/config/config.json: open /home/jenkins/minikube-integration/20045-14126/.minikube/config/config.json: no such file or directory
	I1204 23:11:10.314988   21396 out.go:352] Setting JSON to true
	I1204 23:11:10.315882   21396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3216,"bootTime":1733350654,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:11:10.315937   21396 start.go:139] virtualization: kvm guest
	I1204 23:11:10.318561   21396 out.go:97] [download-only-457798] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1204 23:11:10.318682   21396 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 23:11:10.318724   21396 notify.go:220] Checking for updates...
	I1204 23:11:10.320072   21396 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:11:10.321410   21396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:11:10.322738   21396 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:11:10.323891   21396 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:11:10.325117   21396 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1204 23:11:10.327461   21396 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:11:10.327644   21396 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:11:10.436407   21396 out.go:97] Using the kvm2 driver based on user configuration
	I1204 23:11:10.436430   21396 start.go:297] selected driver: kvm2
	I1204 23:11:10.436436   21396 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:11:10.436741   21396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:10.436861   21396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:11:10.451886   21396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:11:10.451955   21396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:11:10.452543   21396 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1204 23:11:10.452719   21396 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:11:10.452750   21396 cni.go:84] Creating CNI manager for ""
	I1204 23:11:10.452817   21396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 23:11:10.452827   21396 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 23:11:10.452894   21396 start.go:340] cluster config:
	{Name:download-only-457798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-457798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:10.453101   21396 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:10.455073   21396 out.go:97] Downloading VM boot image ...
	I1204 23:11:10.455111   21396 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 23:11:20.043494   21396 out.go:97] Starting "download-only-457798" primary control-plane node in "download-only-457798" cluster
	I1204 23:11:20.043540   21396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 23:11:20.142335   21396 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 23:11:20.142363   21396 cache.go:56] Caching tarball of preloaded images
	I1204 23:11:20.142520   21396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 23:11:20.144419   21396 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 23:11:20.144450   21396 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:11:20.250573   21396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-457798 host does not exist
	  To start a cluster, run: "minikube start -p download-only-457798"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-457798
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-791123 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-791123 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.9951389s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (15.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 23:11:53.573276   21385 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1204 23:11:53.573322   21385 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-791123
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-791123: exit status 85 (61.629743ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-457798 | jenkins | v1.34.0 | 04 Dec 24 23:11 UTC |                     |
	|         | -p download-only-457798        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 23:11 UTC | 04 Dec 24 23:11 UTC |
	| delete  | -p download-only-457798        | download-only-457798 | jenkins | v1.34.0 | 04 Dec 24 23:11 UTC | 04 Dec 24 23:11 UTC |
	| start   | -o=json --download-only        | download-only-791123 | jenkins | v1.34.0 | 04 Dec 24 23:11 UTC |                     |
	|         | -p download-only-791123        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:11:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:11:38.618070   21668 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:11:38.618203   21668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:38.618214   21668 out.go:358] Setting ErrFile to fd 2...
	I1204 23:11:38.618220   21668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:11:38.618423   21668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:11:38.618969   21668 out.go:352] Setting JSON to true
	I1204 23:11:38.619799   21668 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3245,"bootTime":1733350654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:11:38.619897   21668 start.go:139] virtualization: kvm guest
	I1204 23:11:38.622151   21668 out.go:97] [download-only-791123] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:11:38.622310   21668 notify.go:220] Checking for updates...
	I1204 23:11:38.623704   21668 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:11:38.625137   21668 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:11:38.626313   21668 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:11:38.627521   21668 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:11:38.628580   21668 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1204 23:11:38.631075   21668 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:11:38.631277   21668 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:11:38.663440   21668 out.go:97] Using the kvm2 driver based on user configuration
	I1204 23:11:38.663468   21668 start.go:297] selected driver: kvm2
	I1204 23:11:38.663474   21668 start.go:901] validating driver "kvm2" against <nil>
	I1204 23:11:38.663965   21668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:38.664061   21668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20045-14126/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1204 23:11:38.679394   21668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1204 23:11:38.679460   21668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:11:38.679998   21668 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1204 23:11:38.680130   21668 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:11:38.680153   21668 cni.go:84] Creating CNI manager for ""
	I1204 23:11:38.680193   21668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1204 23:11:38.680202   21668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 23:11:38.680252   21668 start.go:340] cluster config:
	{Name:download-only-791123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-791123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:38.680348   21668 iso.go:125] acquiring lock: {Name:mk930593c13a4382dca2f753c80ae255ec929812 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:38.682124   21668 out.go:97] Starting "download-only-791123" primary control-plane node in "download-only-791123" cluster
	I1204 23:11:38.682148   21668 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:38.889502   21668 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:11:38.889534   21668 cache.go:56] Caching tarball of preloaded images
	I1204 23:11:38.889746   21668 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:38.891676   21668 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 23:11:38.891704   21668 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:11:38.996595   21668 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20045-14126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-791123 host does not exist
	  To start a cluster, run: "minikube start -p download-only-791123"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-791123
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1204 23:11:54.137989   21385 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-565243 --alsologtostderr --binary-mirror http://127.0.0.1:39609 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-565243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-565243
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
x
+
TestOffline (82.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-802670 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-802670 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.088589359s)
helpers_test.go:175: Cleaning up "offline-crio-802670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-802670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-802670: (1.728895271s)
--- PASS: TestOffline (82.82s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-633765
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-633765: exit status 85 (57.651794ms)

                                                
                                                
-- stdout --
	* Profile "addons-633765" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-633765"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-633765
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-633765: exit status 85 (59.038952ms)

                                                
                                                
-- stdout --
	* Profile "addons-633765" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-633765"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestCertOptions (72.3s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-288883 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-288883 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m10.88183161s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-288883 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-288883 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-288883 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-288883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-288883
--- PASS: TestCertOptions (72.30s)
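For reference, the ssh/openssl step above only dumps the certificate; the property being verified is that the extra SANs passed via --apiserver-ips and --apiserver-names ended up in the apiserver certificate. A minimal Go sketch of that check, assuming the cert path shown in the log and running inside the minikube VM (illustrative only, not part of the test suite):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The test passed --apiserver-names=localhost,www.google.com and
	// --apiserver-ips=127.0.0.1,192.168.15.15, so those should appear here.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}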

                                                
                                    
x
+
TestCertExpiration (307.98s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-787684 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-787684 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.692882276s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-787684 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-787684 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.191585361s)
helpers_test.go:175: Cleaning up "cert-expiration-787684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-787684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-787684: (1.092946044s)
--- PASS: TestCertExpiration (307.98s)

                                                
                                    
x
+
TestForceSystemdFlag (82.79s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-325690 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-325690 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.825959152s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-325690 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-325690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-325690
--- PASS: TestForceSystemdFlag (82.79s)

                                                
                                    
x
+
TestForceSystemdEnv (70.04s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-383287 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-383287 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.066023181s)
helpers_test.go:175: Cleaning up "force-systemd-env-383287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-383287
--- PASS: TestForceSystemdEnv (70.04s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.86s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1205 00:47:51.056928   21385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:47:51.057059   21385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 00:47:51.083424   21385 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 00:47:51.083690   21385 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 00:47:51.083738   21385 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2609344915/001/docker-machine-driver-kvm2
I1205 00:47:51.322496   21385 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2609344915/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc00097cec0 gz:0xc00097cec8 tar:0xc00097ce70 tar.bz2:0xc00097ce80 tar.gz:0xc00097ce90 tar.xz:0xc00097cea0 tar.zst:0xc00097ceb0 tbz2:0xc00097ce80 tgz:0xc00097ce90 txz:0xc00097cea0 tzst:0xc00097ceb0 xz:0xc00097ced0 zip:0xc00097cee0 zst:0xc00097ced8] Getters:map[file:0xc000c08a70 http:0xc000bb0280 https:0xc000bb02d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 00:47:51.322534   21385 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2609344915/001/docker-machine-driver-kvm2
I1205 00:47:53.211557   21385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:47:53.211647   21385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 00:47:53.238561   21385 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 00:47:53.238594   21385 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 00:47:53.238665   21385 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 00:47:53.238706   21385 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2609344915/002/docker-machine-driver-kvm2
I1205 00:47:53.268032   21385 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2609344915/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc00097cec0 gz:0xc00097cec8 tar:0xc00097ce70 tar.bz2:0xc00097ce80 tar.gz:0xc00097ce90 tar.xz:0xc00097cea0 tar.zst:0xc00097ceb0 tbz2:0xc00097ce80 tgz:0xc00097ce90 txz:0xc00097cea0 tzst:0xc00097ceb0 xz:0xc00097ced0 zip:0xc00097cee0 zst:0xc00097ced8] Getters:map[file:0xc001cdc900 http:0xc0001484b0 https:0xc000148500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 00:47:53.268082   21385 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2609344915/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.86s)
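The driver.go lines above show the install/update fallback: the arch-specific release asset (docker-machine-driver-kvm2-amd64) is tried first, and when its checksum file returns 404 the download falls back to the common, un-suffixed asset. A minimal sketch of that fallback, assuming a hypothetical urlExists probe and skipping checksum verification (illustrative only, not the actual minikube implementation):

package main

import (
	"fmt"
	"net/http"
)

// urlExists is a stand-in for the real asset/checksum probe: it only checks
// that the URL does not return 404, it does not verify any checksum.
func urlExists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode != http.StatusNotFound
}

// driverURL prefers the arch-specific release asset and falls back to the
// common asset when it is missing, mirroring the two download attempts logged above.
func driverURL(version, arch string) string {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	archSpecific := fmt.Sprintf("%s/docker-machine-driver-kvm2-%s", base, arch)
	if urlExists(archSpecific + ".sha256") {
		return archSpecific
	}
	return base + "/docker-machine-driver-kvm2"
}

func main() {
	fmt.Println(driverURL("v1.3.0", "amd64"))
}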

                                                
                                    
x
+
TestErrorSpam/setup (39.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-527176 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-527176 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-527176 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-527176 --driver=kvm2  --container-runtime=crio: (39.217349822s)
--- PASS: TestErrorSpam/setup (39.22s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
x
+
TestErrorSpam/stop (5.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop: (2.282715971s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop: (1.874705935s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-527176 --log_dir /tmp/nospam-527176 stop: (1.537074485s)
--- PASS: TestErrorSpam/stop (5.69s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20045-14126/.minikube/files/etc/test/nested/copy/21385/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.2s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-524018 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.198663235s)
--- PASS: TestFunctional/serial/StartWithProxy (54.20s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.77s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1204 23:53:39.755329   21385 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-524018 --alsologtostderr -v=8: (38.773733956s)
functional_test.go:663: soft start took 38.774498585s for "functional-524018" cluster.
I1204 23:54:18.529414   21385 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (38.77s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-524018 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:3.1: (1.203566878s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:3.3: (1.257539727s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 cache add registry.k8s.io/pause:latest: (1.194682388s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-524018 /tmp/TestFunctionalserialCacheCmdcacheadd_local4090586280/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache add minikube-local-cache-test:functional-524018
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 cache add minikube-local-cache-test:functional-524018: (1.766917121s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache delete minikube-local-cache-test:functional-524018
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-524018
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.293323ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 cache reload: (1.004607254s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 kubectl -- --context functional-524018 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-524018 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-524018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.700950003s)
functional_test.go:761: restart took 32.701056948s for "functional-524018" cluster.
I1204 23:54:59.388894   21385 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (32.70s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-524018 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 logs: (1.398627059s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 logs --file /tmp/TestFunctionalserialLogsFileCmd3086785758/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 logs --file /tmp/TestFunctionalserialLogsFileCmd3086785758/001/logs.txt: (1.41539436s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-524018 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-524018
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-524018: exit status 115 (310.933927ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.51:32366 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-524018 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 config get cpus: exit status 14 (64.93734ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 config get cpus: exit status 14 (52.638011ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
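The two non-zero exits above are expected: "config get" on a key that is not set exits with status 14 and prints "Error: specified key could not be found in config". A small, illustrative Go snippet showing how a caller could distinguish that case via the exit code (the binary path and profile name are taken from the log; this is not part of the test):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-524018", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 14 is what the log above reports for an unset key.
		fmt.Printf("config get failed (exit %d): %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%s", out)
}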

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524018 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-524018 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 35812: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.88s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-524018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.265745ms)

                                                
                                                
-- stdout --
	* [functional-524018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 23:55:35.388278   35702 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:35.388407   35702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:35.388416   35702 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:35.388422   35702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:35.388603   35702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:35.389111   35702 out.go:352] Setting JSON to false
	I1204 23:55:35.390114   35702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5881,"bootTime":1733350654,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:35.390207   35702 start.go:139] virtualization: kvm guest
	I1204 23:55:35.392277   35702 out.go:177] * [functional-524018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:35.393588   35702 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:35.393592   35702 notify.go:220] Checking for updates...
	I1204 23:55:35.396006   35702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:35.397232   35702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:35.398646   35702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:35.399778   35702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:35.400938   35702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:35.402604   35702 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:55:35.402991   35702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:35.403045   35702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:35.417649   35702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I1204 23:55:35.418135   35702 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:35.418664   35702 main.go:141] libmachine: Using API Version  1
	I1204 23:55:35.418683   35702 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:35.418960   35702 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:35.419119   35702 main.go:141] libmachine: (functional-524018) Calling .DriverName
	I1204 23:55:35.419334   35702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:35.419595   35702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:35.419624   35702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:35.433487   35702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I1204 23:55:35.433892   35702 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:35.434374   35702 main.go:141] libmachine: Using API Version  1
	I1204 23:55:35.434391   35702 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:35.434660   35702 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:35.434841   35702 main.go:141] libmachine: (functional-524018) Calling .DriverName
	I1204 23:55:35.467125   35702 out.go:177] * Using the kvm2 driver based on existing profile
	I1204 23:55:35.468464   35702 start.go:297] selected driver: kvm2
	I1204 23:55:35.468476   35702 start.go:901] validating driver "kvm2" against &{Name:functional-524018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-524018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:35.468594   35702 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:35.470668   35702 out.go:201] 
	W1204 23:55:35.471774   35702 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 23:55:35.472906   35702 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
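The dry-run failure above is the intended behavior: requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because it is below the usable minimum of 1800MB reported in the log. A minimal sketch of that validation rule, with the threshold taken from the log message (the constant and function names are illustrative, not minikube's):

package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB floor reported in the dry-run output.
const minUsableMemoryMB = 1800

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250MB, as passed with --memory 250MB above, is rejected.
	fmt.Println(validateRequestedMemory(250))
}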

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-524018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-524018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.145227ms)

                                                
                                                
-- stdout --
	* [functional-524018] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 23:55:34.362259   35583 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:55:34.362410   35583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:34.362421   35583 out.go:358] Setting ErrFile to fd 2...
	I1204 23:55:34.362428   35583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:55:34.362830   35583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1204 23:55:34.363519   35583 out.go:352] Setting JSON to false
	I1204 23:55:34.364764   35583 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5880,"bootTime":1733350654,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:55:34.364880   35583 start.go:139] virtualization: kvm guest
	I1204 23:55:34.367423   35583 out.go:177] * [functional-524018] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1204 23:55:34.369108   35583 notify.go:220] Checking for updates...
	I1204 23:55:34.369124   35583 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:55:34.370726   35583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:55:34.372270   35583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	I1204 23:55:34.373654   35583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	I1204 23:55:34.375131   35583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:55:34.376405   35583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:55:34.378166   35583 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:55:34.378765   35583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:34.378822   35583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:34.394398   35583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I1204 23:55:34.394814   35583 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:34.395400   35583 main.go:141] libmachine: Using API Version  1
	I1204 23:55:34.395433   35583 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:34.395804   35583 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:34.395959   35583 main.go:141] libmachine: (functional-524018) Calling .DriverName
	I1204 23:55:34.396192   35583 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:55:34.396493   35583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1204 23:55:34.396541   35583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1204 23:55:34.411151   35583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I1204 23:55:34.411631   35583 main.go:141] libmachine: () Calling .GetVersion
	I1204 23:55:34.412182   35583 main.go:141] libmachine: Using API Version  1
	I1204 23:55:34.412203   35583 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 23:55:34.412485   35583 main.go:141] libmachine: () Calling .GetMachineName
	I1204 23:55:34.412658   35583 main.go:141] libmachine: (functional-524018) Calling .DriverName
	I1204 23:55:34.444533   35583 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1204 23:55:34.445833   35583 start.go:297] selected driver: kvm2
	I1204 23:55:34.445844   35583 start.go:901] validating driver "kvm2" against &{Name:functional-524018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-524018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:55:34.445960   35583 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:55:34.447862   35583 out.go:201] 
	W1204 23:55:34.448914   35583 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 23:55:34.450026   35583 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (19.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-524018 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-524018 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bqfsk" [a446398e-b901-48a6-a639-5a4306e49aa2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bqfsk" [a446398e-b901-48a6-a639-5a4306e49aa2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.003269251s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.51:30680
functional_test.go:1675: http://192.168.39.51:30680: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-bqfsk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.51:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.51:30680
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.54s)
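The final step of this test, after "service hello-node-connect --url" prints the NodePort URL, is simply an HTTP GET against that URL with a check that the echoserver answers (the response body is shown above). A minimal Go sketch of that probe, using the URL from this particular run (it will differ on other runs):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL printed by "minikube service hello-node-connect --url" in this run.
	resp, err := http.Get("http://192.168.39.51:30680")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}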

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [188d9ac4-86cb-4dde-825c-03cd4037cec5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003459007s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-524018 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-524018 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-524018 get pvc myclaim -o=json
I1204 23:55:13.278375   21385 retry.go:31] will retry after 2.292011019s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9979b967-21df-423e-b4f7-73af059d3773 ResourceVersion:650 Generation:0 CreationTimestamp:2024-12-04 23:55:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001cdc830 VolumeMode:0xc001cdc840 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-524018 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-524018 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8521d7a6-ed56-4457-97f6-4db8a2366566] Pending
helpers_test.go:344: "sp-pod" [8521d7a6-ed56-4457-97f6-4db8a2366566] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8521d7a6-ed56-4457-97f6-4db8a2366566] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003123661s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-524018 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-524018 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-524018 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bdae33fd-b643-4f51-9b15-ba8033a47bf6] Pending
helpers_test.go:344: "sp-pod" [bdae33fd-b643-4f51-9b15-ba8033a47bf6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bdae33fd-b643-4f51-9b15-ba8033a47bf6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003828604s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-524018 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.44s)
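
Editor's note: the claim applied from testdata/storage-provisioner/pvc.yaml can be read back out of the retry message above (ReadWriteOnce, 500Mi, default storage class). A minimal equivalent, applied inline the way the test applies the file; the heredoc form is only for illustration:

kubectl --context functional-524018 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# the test then polls the claim until .status.phase reports "Bound"
kubectl --context functional-524018 get pvc myclaim -o jsonpath='{.status.phase}'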

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh -n functional-524018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cp functional-524018:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2365948762/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh -n functional-524018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh -n functional-524018 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)
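
Editor's note: the cp subcommand is exercised in both directions here, including a destination directory that does not yet exist on the node. A condensed sketch of the same calls; the host-side destination path is simplified from the generated temp directory in the log:

# host -> node, then node -> host, then verify over ssh as helpers_test.go:534 does
out/minikube-linux-amd64 -p functional-524018 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-524018 cp functional-524018:/home/docker/cp-test.txt /tmp/cp-test.txt
out/minikube-linux-amd64 -p functional-524018 ssh -n functional-524018 "sudo cat /home/docker/cp-test.txt"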

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-524018 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-kf5m7" [9dbb38d2-22cb-42d1-80b4-7eed159536ad] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-kf5m7" [9dbb38d2-22cb-42d1-80b4-7eed159536ad] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.008913827s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;": exit status 1 (383.000986ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1204 23:55:27.158449   21385 retry.go:31] will retry after 992.753855ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;": exit status 1 (248.039577ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1204 23:55:28.399745   21385 retry.go:31] will retry after 1.336229743s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;": exit status 1 (280.186485ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1204 23:55:30.017371   21385 retry.go:31] will retry after 2.850958846s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.57s)
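
Editor's note: the access-denied and socket errors above are just the mysql:5.7 container finishing start-up; the test retries the identical query until it succeeds. A rough shell equivalent of that loop; the attempt count and fixed sleep are arbitrary stand-ins for retry.go's growing backoff:

for i in $(seq 1 30); do
  kubectl --context functional-524018 exec mysql-6cdb49bbb-kf5m7 -- \
    mysql -ppassword -e "show databases;" && break
  sleep 2   # retry.go waits progressively longer instead of a fixed interval
done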

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/21385/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/test/nested/copy/21385/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
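
Editor's note: the path /etc/test/nested/copy/21385/hosts is produced by minikube's file sync, which copies files staged under the profile's files directory into the VM at the same relative path on node start. A sketch assuming the default ~/.minikube/files location; the staging path and the timing of the copy are assumptions, only the in-VM path and file content come from the log:

mkdir -p ~/.minikube/files/etc/test/nested/copy/21385
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/21385/hosts
# after the node (re)starts, the file should be visible inside the VM
out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/test/nested/copy/21385/hosts"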

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/21385.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/ssl/certs/21385.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/21385.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /usr/share/ca-certificates/21385.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/213852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/ssl/certs/213852.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/213852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /usr/share/ca-certificates/213852.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
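
Editor's note: the hashed filenames checked here (51391683.0 and 3ec20f2e.0) follow the OpenSSL subject-hash naming used for CA directories such as /etc/ssl/certs. A small sketch of how such a name is derived; the certificate path is a placeholder:

# print the subject hash a PEM certificate is linked under in /etc/ssl/certs
openssl x509 -noout -subject_hash -in /path/to/21385.pem
# the test checks both the synced .pem copies and the <hash>.0 entries it expects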

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-524018 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
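
Editor's note: the go-template above only flattens the label keys of the first node; the same labels can be inspected with plainer queries (shown as a reference, not what the test runs):

kubectl --context functional-524018 get nodes --show-labels
kubectl --context functional-524018 get nodes -o jsonpath='{.items[0].metadata.labels}'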

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "sudo systemctl is-active docker": exit status 1 (254.314493ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "sudo systemctl is-active containerd": exit status 1 (216.100097ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
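
Editor's note: "ssh: Process exited with status 3" is expected here. systemctl is-active prints "inactive" and exits 3 when a unit is not running, so a non-zero exit for docker and containerd is exactly what this test wants on a crio profile. A quick way to see the state and the remote exit code together:

# inside the VM, is-active reports the unit state and exits 3 for an inactive unit
out/minikube-linux-amd64 -p functional-524018 ssh "sudo systemctl is-active docker; echo remote-exit=\$?"
# minikube surfaces the remote failure as its own non-zero exit, which is what the test asserts on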

                                                
                                    
x
+
TestFunctional/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524018 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-524018
localhost/kicbase/echo-server:functional-524018
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524018 image ls --format short --alsologtostderr:
I1204 23:55:43.644567   36067 out.go:345] Setting OutFile to fd 1 ...
I1204 23:55:43.644659   36067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:43.644663   36067 out.go:358] Setting ErrFile to fd 2...
I1204 23:55:43.644667   36067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:43.644822   36067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
I1204 23:55:43.645355   36067 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:43.645447   36067 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:43.645793   36067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:43.645830   36067 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:43.660430   36067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
I1204 23:55:43.660979   36067 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:43.661727   36067 main.go:141] libmachine: Using API Version  1
I1204 23:55:43.661749   36067 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:43.662159   36067 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:43.662362   36067 main.go:141] libmachine: (functional-524018) Calling .GetState
I1204 23:55:43.664103   36067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:43.664140   36067 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:43.679056   36067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
I1204 23:55:43.679468   36067 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:43.679918   36067 main.go:141] libmachine: Using API Version  1
I1204 23:55:43.679937   36067 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:43.680321   36067 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:43.680582   36067 main.go:141] libmachine: (functional-524018) Calling .DriverName
I1204 23:55:43.680786   36067 ssh_runner.go:195] Run: systemctl --version
I1204 23:55:43.680807   36067 main.go:141] libmachine: (functional-524018) Calling .GetSSHHostname
I1204 23:55:43.683810   36067 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:43.684210   36067 main.go:141] libmachine: (functional-524018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:61:40", ip: ""} in network mk-functional-524018: {Iface:virbr1 ExpiryTime:2024-12-05 00:53:00 +0000 UTC Type:0 Mac:52:54:00:54:61:40 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-524018 Clientid:01:52:54:00:54:61:40}
I1204 23:55:43.684238   36067 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined IP address 192.168.39.51 and MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:43.684391   36067 main.go:141] libmachine: (functional-524018) Calling .GetSSHPort
I1204 23:55:43.684578   36067 main.go:141] libmachine: (functional-524018) Calling .GetSSHKeyPath
I1204 23:55:43.684723   36067 main.go:141] libmachine: (functional-524018) Calling .GetSSHUsername
I1204 23:55:43.684868   36067 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/functional-524018/id_rsa Username:docker}
I1204 23:55:43.791204   36067 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:55:43.850452   36067 main.go:141] libmachine: Making call to close driver server
I1204 23:55:43.850465   36067 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:43.850768   36067 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:43.850781   36067 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:43.850769   36067 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
I1204 23:55:43.850792   36067 main.go:141] libmachine: Making call to close driver server
I1204 23:55:43.850800   36067 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:43.851054   36067 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:43.851071   36067 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:43.851079   36067 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
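
Editor's note: all four image ls format variants in this group render the same inventory, which minikube collects by running crictl images --output json inside the VM (visible in the stderr above). A compact reference; the jq filter is only an illustration:

out/minikube-linux-amd64 -p functional-524018 image ls --format short
out/minikube-linux-amd64 -p functional-524018 image ls --format table
out/minikube-linux-amd64 -p functional-524018 image ls --format json
out/minikube-linux-amd64 -p functional-524018 image ls --format yaml
# the raw data the listings are built from
out/minikube-linux-amd64 -p functional-524018 ssh "sudo crictl images --output json" | jq '.images[].repoTags'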

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524018 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/kicbase/echo-server           | functional-524018  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-524018  | 3e1053346416d | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524018 image ls --format table --alsologtostderr:
I1204 23:55:44.758769   36223 out.go:345] Setting OutFile to fd 1 ...
I1204 23:55:44.758893   36223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.758902   36223 out.go:358] Setting ErrFile to fd 2...
I1204 23:55:44.758906   36223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.759098   36223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
I1204 23:55:44.759680   36223 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.759791   36223 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.760174   36223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.760221   36223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.774735   36223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
I1204 23:55:44.775222   36223 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.775763   36223 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.775788   36223 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.776091   36223 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.776258   36223 main.go:141] libmachine: (functional-524018) Calling .GetState
I1204 23:55:44.778306   36223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.778359   36223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.792630   36223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
I1204 23:55:44.792966   36223 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.793503   36223 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.793553   36223 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.793852   36223 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.794014   36223 main.go:141] libmachine: (functional-524018) Calling .DriverName
I1204 23:55:44.794205   36223 ssh_runner.go:195] Run: systemctl --version
I1204 23:55:44.794226   36223 main.go:141] libmachine: (functional-524018) Calling .GetSSHHostname
I1204 23:55:44.796681   36223 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.796985   36223 main.go:141] libmachine: (functional-524018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:61:40", ip: ""} in network mk-functional-524018: {Iface:virbr1 ExpiryTime:2024-12-05 00:53:00 +0000 UTC Type:0 Mac:52:54:00:54:61:40 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-524018 Clientid:01:52:54:00:54:61:40}
I1204 23:55:44.797018   36223 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined IP address 192.168.39.51 and MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.797165   36223 main.go:141] libmachine: (functional-524018) Calling .GetSSHPort
I1204 23:55:44.797302   36223 main.go:141] libmachine: (functional-524018) Calling .GetSSHKeyPath
I1204 23:55:44.797452   36223 main.go:141] libmachine: (functional-524018) Calling .GetSSHUsername
I1204 23:55:44.797571   36223 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/functional-524018/id_rsa Username:docker}
I1204 23:55:44.886576   36223 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:55:44.925139   36223 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.925160   36223 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.925417   36223 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.925438   36223 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:44.925454   36223 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.925462   36223 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.925465   36223 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
I1204 23:55:44.925676   36223 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.925691   36223 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524018 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d35
74949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-524018"],"size":"4943877"},{"id":"3e1053346416df68c5d449903ccebf071a02d7aeb7593415add8b3c8ba79bfb8","repoDigests":["localhost/minikube-local-cache-test@sha256:875ea6306fd4f7b31a49c93e9d6f0aa0f336df1758d0acfc0da7fa608b8d8de8"],"repoTags":["localhost/minikube-local-cache-test:functional-524018"],"size":"3328"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e835705
1647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sh
a256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f
33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56
ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a
4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524018 image ls --format json --alsologtostderr:
I1204 23:55:44.544204   36200 out.go:345] Setting OutFile to fd 1 ...
I1204 23:55:44.544351   36200 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.544361   36200 out.go:358] Setting ErrFile to fd 2...
I1204 23:55:44.544367   36200 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.544543   36200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
I1204 23:55:44.545113   36200 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.545240   36200 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.545629   36200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.545677   36200 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.560681   36200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
I1204 23:55:44.561155   36200 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.561760   36200 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.561789   36200 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.562143   36200 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.562361   36200 main.go:141] libmachine: (functional-524018) Calling .GetState
I1204 23:55:44.564192   36200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.564238   36200 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.578703   36200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
I1204 23:55:44.579115   36200 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.579532   36200 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.579555   36200 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.579812   36200 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.579988   36200 main.go:141] libmachine: (functional-524018) Calling .DriverName
I1204 23:55:44.580190   36200 ssh_runner.go:195] Run: systemctl --version
I1204 23:55:44.580212   36200 main.go:141] libmachine: (functional-524018) Calling .GetSSHHostname
I1204 23:55:44.582794   36200 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.583170   36200 main.go:141] libmachine: (functional-524018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:61:40", ip: ""} in network mk-functional-524018: {Iface:virbr1 ExpiryTime:2024-12-05 00:53:00 +0000 UTC Type:0 Mac:52:54:00:54:61:40 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-524018 Clientid:01:52:54:00:54:61:40}
I1204 23:55:44.583203   36200 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined IP address 192.168.39.51 and MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.583322   36200 main.go:141] libmachine: (functional-524018) Calling .GetSSHPort
I1204 23:55:44.583464   36200 main.go:141] libmachine: (functional-524018) Calling .GetSSHKeyPath
I1204 23:55:44.583582   36200 main.go:141] libmachine: (functional-524018) Calling .GetSSHUsername
I1204 23:55:44.583730   36200 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/functional-524018/id_rsa Username:docker}
I1204 23:55:44.668831   36200 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:55:44.710403   36200 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.710420   36200 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.710691   36200 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.710710   36200 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:44.710726   36200 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.710734   36200 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.710713   36200 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
I1204 23:55:44.710951   36200 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
I1204 23:55:44.710995   36200 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.711028   36200 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls --format yaml --alsologtostderr
2024/12/04 23:55:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524018 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-524018
size: "4943877"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3e1053346416df68c5d449903ccebf071a02d7aeb7593415add8b3c8ba79bfb8
repoDigests:
- localhost/minikube-local-cache-test@sha256:875ea6306fd4f7b31a49c93e9d6f0aa0f336df1758d0acfc0da7fa608b8d8de8
repoTags:
- localhost/minikube-local-cache-test:functional-524018
size: "3328"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524018 image ls --format yaml --alsologtostderr:
I1204 23:55:43.899432   36115 out.go:345] Setting OutFile to fd 1 ...
I1204 23:55:43.899519   36115 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:43.899523   36115 out.go:358] Setting ErrFile to fd 2...
I1204 23:55:43.899527   36115 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:43.899707   36115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
I1204 23:55:43.900418   36115 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:43.900557   36115 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:43.900953   36115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:43.900991   36115 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:43.916139   36115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
I1204 23:55:43.916594   36115 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:43.917280   36115 main.go:141] libmachine: Using API Version  1
I1204 23:55:43.917304   36115 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:43.917655   36115 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:43.917837   36115 main.go:141] libmachine: (functional-524018) Calling .GetState
I1204 23:55:43.919777   36115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:43.919815   36115 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:43.934280   36115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42869
I1204 23:55:43.934679   36115 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:43.935232   36115 main.go:141] libmachine: Using API Version  1
I1204 23:55:43.935262   36115 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:43.935559   36115 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:43.935730   36115 main.go:141] libmachine: (functional-524018) Calling .DriverName
I1204 23:55:43.935925   36115 ssh_runner.go:195] Run: systemctl --version
I1204 23:55:43.935946   36115 main.go:141] libmachine: (functional-524018) Calling .GetSSHHostname
I1204 23:55:43.938633   36115 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:43.939064   36115 main.go:141] libmachine: (functional-524018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:61:40", ip: ""} in network mk-functional-524018: {Iface:virbr1 ExpiryTime:2024-12-05 00:53:00 +0000 UTC Type:0 Mac:52:54:00:54:61:40 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-524018 Clientid:01:52:54:00:54:61:40}
I1204 23:55:43.939092   36115 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined IP address 192.168.39.51 and MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:43.939186   36115 main.go:141] libmachine: (functional-524018) Calling .GetSSHPort
I1204 23:55:43.939358   36115 main.go:141] libmachine: (functional-524018) Calling .GetSSHKeyPath
I1204 23:55:43.939490   36115 main.go:141] libmachine: (functional-524018) Calling .GetSSHUsername
I1204 23:55:43.939653   36115 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/functional-524018/id_rsa Username:docker}
I1204 23:55:44.026026   36115 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:55:44.064795   36115 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.064806   36115 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.065056   36115 main.go:141] libmachine: (functional-524018) DBG | Closing plugin on server side
I1204 23:55:44.065090   36115 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.065109   36115 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:44.065122   36115 main.go:141] libmachine: Making call to close driver server
I1204 23:55:44.065133   36115 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:44.065360   36115 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:44.065376   36115 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh pgrep buildkitd: exit status 1 (192.793796ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image build -t localhost/my-image:functional-524018 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 image build -t localhost/my-image:functional-524018 testdata/build --alsologtostderr: (3.311560236s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-524018 image build -t localhost/my-image:functional-524018 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fbe343b76aa
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-524018
--> 0dacbbb067e
Successfully tagged localhost/my-image:functional-524018
0dacbbb067ed06f74e37a105a6a55482ab97b911c2091f4f44c9f39788c8434c
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-524018 image build -t localhost/my-image:functional-524018 testdata/build --alsologtostderr:
I1204 23:55:44.306758   36176 out.go:345] Setting OutFile to fd 1 ...
I1204 23:55:44.306895   36176 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.306904   36176 out.go:358] Setting ErrFile to fd 2...
I1204 23:55:44.306908   36176 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:55:44.307061   36176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
I1204 23:55:44.307595   36176 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.308125   36176 config.go:182] Loaded profile config "functional-524018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:55:44.308461   36176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.308500   36176 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.323314   36176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
I1204 23:55:44.323785   36176 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.324355   36176 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.324375   36176 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.324715   36176 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.324897   36176 main.go:141] libmachine: (functional-524018) Calling .GetState
I1204 23:55:44.326696   36176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1204 23:55:44.326732   36176 main.go:141] libmachine: Launching plugin server for driver kvm2
I1204 23:55:44.341606   36176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
I1204 23:55:44.342045   36176 main.go:141] libmachine: () Calling .GetVersion
I1204 23:55:44.342483   36176 main.go:141] libmachine: Using API Version  1
I1204 23:55:44.342506   36176 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 23:55:44.342813   36176 main.go:141] libmachine: () Calling .GetMachineName
I1204 23:55:44.343002   36176 main.go:141] libmachine: (functional-524018) Calling .DriverName
I1204 23:55:44.343242   36176 ssh_runner.go:195] Run: systemctl --version
I1204 23:55:44.343274   36176 main.go:141] libmachine: (functional-524018) Calling .GetSSHHostname
I1204 23:55:44.346143   36176 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.346542   36176 main.go:141] libmachine: (functional-524018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:61:40", ip: ""} in network mk-functional-524018: {Iface:virbr1 ExpiryTime:2024-12-05 00:53:00 +0000 UTC Type:0 Mac:52:54:00:54:61:40 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-524018 Clientid:01:52:54:00:54:61:40}
I1204 23:55:44.346580   36176 main.go:141] libmachine: (functional-524018) DBG | domain functional-524018 has defined IP address 192.168.39.51 and MAC address 52:54:00:54:61:40 in network mk-functional-524018
I1204 23:55:44.346693   36176 main.go:141] libmachine: (functional-524018) Calling .GetSSHPort
I1204 23:55:44.346859   36176 main.go:141] libmachine: (functional-524018) Calling .GetSSHKeyPath
I1204 23:55:44.346975   36176 main.go:141] libmachine: (functional-524018) Calling .GetSSHUsername
I1204 23:55:44.347120   36176 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/functional-524018/id_rsa Username:docker}
I1204 23:55:44.428093   36176 build_images.go:161] Building image from path: /tmp/build.1175005326.tar
I1204 23:55:44.428162   36176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 23:55:44.437772   36176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1175005326.tar
I1204 23:55:44.441897   36176 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1175005326.tar: stat -c "%s %y" /var/lib/minikube/build/build.1175005326.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1175005326.tar': No such file or directory
I1204 23:55:44.441927   36176 ssh_runner.go:362] scp /tmp/build.1175005326.tar --> /var/lib/minikube/build/build.1175005326.tar (3072 bytes)
I1204 23:55:44.468727   36176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1175005326
I1204 23:55:44.477546   36176 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1175005326 -xf /var/lib/minikube/build/build.1175005326.tar
I1204 23:55:44.487137   36176 crio.go:315] Building image: /var/lib/minikube/build/build.1175005326
I1204 23:55:44.487196   36176 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-524018 /var/lib/minikube/build/build.1175005326 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1204 23:55:47.547819   36176 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-524018 /var/lib/minikube/build/build.1175005326 --cgroup-manager=cgroupfs: (3.06059536s)
I1204 23:55:47.547879   36176 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1175005326
I1204 23:55:47.558083   36176 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1175005326.tar
I1204 23:55:47.567790   36176 build_images.go:217] Built localhost/my-image:functional-524018 from /tmp/build.1175005326.tar
I1204 23:55:47.567822   36176 build_images.go:133] succeeded building to: functional-524018
I1204 23:55:47.567827   36176 build_images.go:134] failed building to: 
I1204 23:55:47.567850   36176 main.go:141] libmachine: Making call to close driver server
I1204 23:55:47.567860   36176 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:47.568121   36176 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:47.568161   36176 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 23:55:47.568178   36176 main.go:141] libmachine: Making call to close driver server
I1204 23:55:47.568185   36176 main.go:141] libmachine: (functional-524018) Calling .Close
I1204 23:55:47.568407   36176 main.go:141] libmachine: Successfully made call to close driver server
I1204 23:55:47.568425   36176 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)
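
The three build steps in the stdout above imply a build context equivalent to the sketch below. The real contents of testdata/build are not reproduced in this log, so the Dockerfile and content.txt here are a reconstruction from the STEP lines, not the repository's actual fixture.

  mkdir -p /tmp/build && cd /tmp/build
  printf 'hello from the build test\n' > content.txt    # placeholder; the fixture's real contents are unknown
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  minikube -p functional-524018 image build -t localhost/my-image:functional-524018 .
  minikube -p functional-524018 image ls | grep my-image   # the test verifies the new tag the same way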

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.708009536s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-524018
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image load --daemon kicbase/echo-server:functional-524018 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 image load --daemon kicbase/echo-server:functional-524018 --alsologtostderr: (1.142585103s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image load --daemon kicbase/echo-server:functional-524018 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (17.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdany-port2735879318/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733356510079209712" to /tmp/TestFunctionalparallelMountCmdany-port2735879318/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733356510079209712" to /tmp/TestFunctionalparallelMountCmdany-port2735879318/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733356510079209712" to /tmp/TestFunctionalparallelMountCmdany-port2735879318/001/test-1733356510079209712
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.541844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 23:55:10.310097   21385 retry.go:31] will retry after 724.734741ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 23:55 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 23:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 23:55 test-1733356510079209712
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh cat /mount-9p/test-1733356510079209712
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-524018 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4dac8202-0b63-4d02-a757-3b29c9910c04] Pending
helpers_test.go:344: "busybox-mount" [4dac8202-0b63-4d02-a757-3b29c9910c04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4dac8202-0b63-4d02-a757-3b29c9910c04] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4dac8202-0b63-4d02-a757-3b29c9910c04] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.004365597s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-524018 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdany-port2735879318/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.03s)
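
The 9p mount above can be reproduced by hand with the same commands the test drives. The sketch below, assuming a scratch directory on the host, shows the write-on-host / read-in-guest round trip that the test checks; the busybox-mount pod from testdata/busybox-mount-test.yaml additionally reads and rewrites files on the mount, but its manifest is not reproduced here.

  mkdir -p /tmp/mount-demo
  date > /tmp/mount-demo/created-by-test
  minikube -p functional-524018 mount /tmp/mount-demo:/mount-9p &   # the 9p server stays in the foreground, so background it
  MOUNT_PID=$!
  sleep 5                                                           # give the mount a moment to come up
  minikube -p functional-524018 ssh -- "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-524018 ssh -- cat /mount-9p/created-by-test
  kill "$MOUNT_PID"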

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-524018
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image load --daemon kicbase/echo-server:functional-524018 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image save kicbase/echo-server:functional-524018 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 image save kicbase/echo-server:functional-524018 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.751998768s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image rm kicbase/echo-server:functional-524018 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-524018
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 image save --daemon kicbase/echo-server:functional-524018 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-524018
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
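
The ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon tests above exercise both directions of image transfer between the host and the cluster's CRI-O store. A condensed round trip using the same subcommands, assuming kicbase/echo-server:1.0 is already present in the host's Docker daemon (the Setup test pulled and tagged it):

  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-524018
  minikube -p functional-524018 image load --daemon kicbase/echo-server:functional-524018   # host daemon -> cluster
  minikube -p functional-524018 image save kicbase/echo-server:functional-524018 /tmp/echo-server-save.tar
  minikube -p functional-524018 image rm kicbase/echo-server:functional-524018
  minikube -p functional-524018 image load /tmp/echo-server-save.tar                        # tarball -> cluster
  minikube -p functional-524018 image save --daemon kicbase/echo-server:functional-524018   # cluster -> host daemon
  docker image inspect localhost/kicbase/echo-server:functional-524018                      # note the localhost/ prefix on the way back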

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdspecific-port3168931386/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.060889ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 23:55:27.353619   21385 retry.go:31] will retry after 725.31518ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdspecific-port3168931386/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "sudo umount -f /mount-9p": exit status 1 (229.625774ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-524018 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdspecific-port3168931386/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T" /mount1: exit status 1 (256.261437ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 23:55:29.479940   21385 retry.go:31] will retry after 688.683054ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-524018 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-524018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2223071483/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
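
VerifyCleanup starts three concurrent mounts of the same host directory and then relies on a single kill switch rather than stopping each process individually. A minimal reproduction of that pattern (the loop is a condensed stand-in for the three daemon invocations above):

  mkdir -p /tmp/mount-demo
  for target in /mount1 /mount2 /mount3; do
    minikube -p functional-524018 mount /tmp/mount-demo:"$target" &
  done
  sleep 5
  minikube -p functional-524018 ssh -- findmnt -T /mount1     # repeat for /mount2 and /mount3
  minikube mount -p functional-524018 --kill=true             # tears down all mount processes for the profile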

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-524018 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-524018 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-wzvpv" [d1fbf094-fc13-4a10-8303-2f536a4d9abf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-wzvpv" [d1fbf094-fc13-4a10-8303-2f536a4d9abf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003403367s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.41s)
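
The deployment above is created imperatively and then exposed as a NodePort service, which the later ServiceCmd subtests resolve through minikube. The equivalent steps from a shell, with `kubectl wait` standing in for the test's polling loop (an assumption, not what functional_test.go literally does):

  kubectl --context functional-524018 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-524018 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-524018 wait --for=condition=Ready pod -l app=hello-node --timeout=600s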

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "295.695547ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.205329ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "328.325688ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "60.530935ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 service list: (1.313877559s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-524018 service list -o json: (1.254724968s)
functional_test.go:1494: Took "1.254817742s" to run "out/minikube-linux-amd64 -p functional-524018 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.51:31888
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-524018 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.51:31888
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
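
Once hello-node is serving, the ServiceCmd subtests above resolve its endpoint in several formats. The same lookups from a shell, plus a direct request against the NodePort the HTTPS/URL subtests reported:

  minikube -p functional-524018 service list -o json
  minikube -p functional-524018 service --namespace=default --https --url hello-node   # e.g. https://192.168.39.51:31888 in this run
  minikube -p functional-524018 service hello-node --url --format='{{.IP}}'
  curl "$(minikube -p functional-524018 service hello-node --url)"                     # hit the NodePort endpoint directly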

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-524018
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-524018
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-524018
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (197.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168758 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-168758 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.121920664s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.79s)
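
The --ha flag provisions additional control-plane nodes for the profile (the later subtests operate on ha-168758-m02 and ha-168758-m03 alongside the primary node). A compressed version of the start-and-verify sequence, with kubectl reading the context the profile creates:

  minikube start -p ha-168758 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
  minikube -p ha-168758 status -v=7 --alsologtostderr
  kubectl --context ha-168758 get nodes -o wide    # expect multiple control-plane nodes to be Ready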

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-168758 -- rollout status deployment/busybox: (4.584938207s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-brzwz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-qnw8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-vzls9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-brzwz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-qnw8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-vzls9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-brzwz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-qnw8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-vzls9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.63s)
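
The DNS checks above run the same three lookups in every busybox replica. A loop form of that verification, assuming (as in this run) that the only pods in the default namespace are the busybox replicas created from testdata/ha/ha-pod-dns-test.yaml, whose manifest is not reproduced here:

  kubectl --context ha-168758 rollout status deployment/busybox
  for pod in $(kubectl --context ha-168758 get pods -o jsonpath='{.items[*].metadata.name}'); do
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
      kubectl --context ha-168758 exec "$pod" -- nslookup "$name"
    done
  done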

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-brzwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-brzwz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-qnw8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-qnw8r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-vzls9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168758 -- exec busybox-7dff88458-vzls9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
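
The pipeline in the exec commands above extracts the resolved address of host.minikube.internal from busybox's nslookup output (line 5, third space-separated field, on this busybox build) and then pings it from inside the pod. Broken out for a single pod, with the field extraction kept verbatim from the test; the pod name is the one from this run:

  POD=busybox-7dff88458-brzwz
  HOST_IP=$(kubectl --context ha-168758 exec "$POD" -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-168758 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"   # 192.168.39.1 in this run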

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (53.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-168758 -v=7 --alsologtostderr
E1205 00:00:07.014314   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.020729   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.032119   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.053542   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.094936   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.176712   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.338261   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:07.660163   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:08.302472   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:09.584491   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:00:12.146722   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-168758 -v=7 --alsologtostderr: (52.356257101s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-168758 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp testdata/cp-test.txt ha-168758:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758:/home/docker/cp-test.txt ha-168758-m02:/home/docker/cp-test_ha-168758_ha-168758-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test_ha-168758_ha-168758-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758:/home/docker/cp-test.txt ha-168758-m03:/home/docker/cp-test_ha-168758_ha-168758-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test.txt"
E1205 00:00:17.268774   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test_ha-168758_ha-168758-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758:/home/docker/cp-test.txt ha-168758-m04:/home/docker/cp-test_ha-168758_ha-168758-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test_ha-168758_ha-168758-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp testdata/cp-test.txt ha-168758-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m02:/home/docker/cp-test.txt ha-168758:/home/docker/cp-test_ha-168758-m02_ha-168758.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test_ha-168758-m02_ha-168758.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m02:/home/docker/cp-test.txt ha-168758-m03:/home/docker/cp-test_ha-168758-m02_ha-168758-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test_ha-168758-m02_ha-168758-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m02:/home/docker/cp-test.txt ha-168758-m04:/home/docker/cp-test_ha-168758-m02_ha-168758-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test_ha-168758-m02_ha-168758-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp testdata/cp-test.txt ha-168758-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt ha-168758:/home/docker/cp-test_ha-168758-m03_ha-168758.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test_ha-168758-m03_ha-168758.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt ha-168758-m02:/home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test_ha-168758-m03_ha-168758-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m03:/home/docker/cp-test.txt ha-168758-m04:/home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test_ha-168758-m03_ha-168758-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp testdata/cp-test.txt ha-168758-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2840799661/001/cp-test_ha-168758-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt ha-168758:/home/docker/cp-test_ha-168758-m04_ha-168758.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758 "sudo cat /home/docker/cp-test_ha-168758-m04_ha-168758.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt ha-168758-m02:/home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test_ha-168758-m04_ha-168758-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 cp ha-168758-m04:/home/docker/cp-test.txt ha-168758-m03:/home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 ssh -n ha-168758-m03 "sudo cat /home/docker/cp-test_ha-168758-m04_ha-168758-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.91s)
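
CopyFile exercises `minikube cp` in every direction: host to node, node to host, and node to node, each time verifying the result with `ssh -n <node> sudo cat`. The core pattern, reduced to one pair of nodes:

  minikube -p ha-168758 cp testdata/cp-test.txt ha-168758:/home/docker/cp-test.txt          # host -> primary node
  minikube -p ha-168758 cp ha-168758:/home/docker/cp-test.txt /tmp/cp-test_ha-168758.txt    # node -> host
  minikube -p ha-168758 cp ha-168758:/home/docker/cp-test.txt ha-168758-m02:/home/docker/cp-test_ha-168758_ha-168758-m02.txt   # node -> node
  minikube -p ha-168758 ssh -n ha-168758-m02 "sudo cat /home/docker/cp-test_ha-168758_ha-168758-m02.txt"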

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-168758 node delete m03 -v=7 --alsologtostderr: (15.919054779s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.65s)
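
After the delete, the test confirms both that the node list shrank and that every remaining node reports Ready, using the go-template shown above (it prints the status of each node's Ready condition, one True/False per node). The same check spelled out on the command line:

  minikube -p ha-168758 node delete m03
  kubectl --context ha-168758 get nodes
  kubectl --context ha-168758 get nodes \
    -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'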

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (354.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168758 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 00:15:07.018168   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:16:30.083136   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-168758 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.299212022s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (77.22s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-168758 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-168758 --control-plane -v=7 --alsologtostderr: (1m16.30890231s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-168758 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (53.36s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-955672 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-955672 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.361549308s)
--- PASS: TestJSONOutput/start/Command (53.36s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-955672 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-955672 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-955672 --output=json --user=testUser
E1205 00:20:07.015937   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-955672 --output=json --user=testUser: (6.621987682s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-663950 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-663950 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.300738ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"aea2fbe9-a5bf-4ddd-b0b8-12e526994534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-663950] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aca1f8c-5ebc-47e3-806e-1790a7a470b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"bc3082b6-5fe7-4574-b611-d8902b6bf718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73386bc6-e4ca-4468-a916-15567dda13cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig"}}
	{"specversion":"1.0","id":"1b1790cc-d980-4eab-ad60-1d03d480315d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube"}}
	{"specversion":"1.0","id":"bd606606-d803-4029-ad8b-5da0fe77303b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bd085f20-d27f-40eb-88bc-078768583343","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"999e1099-bb7a-4254-ae28-1fdb416e2f31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
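Note: each line in the stdout block above is a CloudEvents-style envelope: specversion, id, source, and type describe the event, while the data payload carries minikube's step or error details (message plus currentstep/totalsteps for steps, or exitcode/advice for errors). A minimal sketch for decoding such lines, assuming only the fields visible in the output above:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // Event mirrors the fields visible in minikube's --output=json lines above.
    type Event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" {
                continue
            }
            var ev Event
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
                continue
            }
            // Step events carry currentstep/totalsteps; error events carry exitcode and advice.
            fmt.Printf("%-45s %s\n", ev.Type, ev.Data["message"])
        }
    }

Piping `minikube start --output=json ...` through a small decoder like this is one way to drive CI tooling off the structured output instead of scraping the human-readable log.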
helpers_test.go:175: Cleaning up "json-output-error-663950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-663950
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (84.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-501296 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-501296 --driver=kvm2  --container-runtime=crio: (38.901912808s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-513752 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-513752 --driver=kvm2  --container-runtime=crio: (42.740808961s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-501296
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-513752
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-513752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-513752
helpers_test.go:175: Cleaning up "first-501296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-501296
--- PASS: TestMinikubeProfile (84.44s)

TestMountStart/serial/StartWithMountFirst (31.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-278645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-278645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.043598025s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.04s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-278645 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-278645 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (25.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-295160 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-295160 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.492764911s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.49s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-278645 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-295160
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-295160: (1.274304653s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (22.59s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-295160
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-295160: (21.58773597s)
--- PASS: TestMountStart/serial/RestartStopped (22.59s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-295160 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (110.15s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-591074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-591074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.753227076s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.15s)

TestMultiNode/serial/DeployApp2Nodes (7.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-591074 -- rollout status deployment/busybox: (5.692864421s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-2lbns -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-bprkd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-2lbns -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-bprkd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-2lbns -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-bprkd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.13s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-2lbns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-2lbns -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-bprkd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-591074 -- exec busybox-7dff88458-bprkd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (51.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-591074 -v 3 --alsologtostderr
E1205 00:25:07.014151   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-591074 -v 3 --alsologtostderr: (50.793468244s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.34s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-591074 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

TestMultiNode/serial/CopyFile (7.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp testdata/cp-test.txt multinode-591074:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074:/home/docker/cp-test.txt multinode-591074-m02:/home/docker/cp-test_multinode-591074_multinode-591074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test_multinode-591074_multinode-591074-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074:/home/docker/cp-test.txt multinode-591074-m03:/home/docker/cp-test_multinode-591074_multinode-591074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test_multinode-591074_multinode-591074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp testdata/cp-test.txt multinode-591074-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt multinode-591074:/home/docker/cp-test_multinode-591074-m02_multinode-591074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test_multinode-591074-m02_multinode-591074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m02:/home/docker/cp-test.txt multinode-591074-m03:/home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test_multinode-591074-m02_multinode-591074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp testdata/cp-test.txt multinode-591074-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile49849138/001/cp-test_multinode-591074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt multinode-591074:/home/docker/cp-test_multinode-591074-m03_multinode-591074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074 "sudo cat /home/docker/cp-test_multinode-591074-m03_multinode-591074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 cp multinode-591074-m03:/home/docker/cp-test.txt multinode-591074-m02:/home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 ssh -n multinode-591074-m02 "sudo cat /home/docker/cp-test_multinode-591074-m03_multinode-591074-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.08s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 node stop m03: (1.389220599s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-591074 status: exit status 7 (420.743506ms)

                                                
                                                
-- stdout --
	multinode-591074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr: exit status 7 (414.233916ms)

                                                
                                                
-- stdout --
	multinode-591074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 00:25:58.555105   53565 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:25:58.555240   53565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:25:58.555251   53565 out.go:358] Setting ErrFile to fd 2...
	I1205 00:25:58.555257   53565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:25:58.555421   53565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-14126/.minikube/bin
	I1205 00:25:58.555633   53565 out.go:352] Setting JSON to false
	I1205 00:25:58.555664   53565 mustload.go:65] Loading cluster: multinode-591074
	I1205 00:25:58.555704   53565 notify.go:220] Checking for updates...
	I1205 00:25:58.556077   53565 config.go:182] Loaded profile config "multinode-591074": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:25:58.556097   53565 status.go:174] checking status of multinode-591074 ...
	I1205 00:25:58.556556   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.556621   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.573054   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I1205 00:25:58.573576   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.574105   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.574124   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.574516   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.574702   53565 main.go:141] libmachine: (multinode-591074) Calling .GetState
	I1205 00:25:58.576316   53565 status.go:371] multinode-591074 host status = "Running" (err=<nil>)
	I1205 00:25:58.576336   53565 host.go:66] Checking if "multinode-591074" exists ...
	I1205 00:25:58.576771   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.576825   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.591473   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46539
	I1205 00:25:58.591889   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.592354   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.592380   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.592688   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.592860   53565 main.go:141] libmachine: (multinode-591074) Calling .GetIP
	I1205 00:25:58.595593   53565 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:25:58.596024   53565 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:25:58.596046   53565 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:25:58.596200   53565 host.go:66] Checking if "multinode-591074" exists ...
	I1205 00:25:58.596465   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.596502   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.611914   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I1205 00:25:58.612382   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.612832   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.612851   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.613133   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.613317   53565 main.go:141] libmachine: (multinode-591074) Calling .DriverName
	I1205 00:25:58.613481   53565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 00:25:58.613521   53565 main.go:141] libmachine: (multinode-591074) Calling .GetSSHHostname
	I1205 00:25:58.616297   53565 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:25:58.616759   53565 main.go:141] libmachine: (multinode-591074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:91:b1", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:23:13 +0000 UTC Type:0 Mac:52:54:00:fc:91:b1 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-591074 Clientid:01:52:54:00:fc:91:b1}
	I1205 00:25:58.616786   53565 main.go:141] libmachine: (multinode-591074) DBG | domain multinode-591074 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:91:b1 in network mk-multinode-591074
	I1205 00:25:58.616945   53565 main.go:141] libmachine: (multinode-591074) Calling .GetSSHPort
	I1205 00:25:58.617121   53565 main.go:141] libmachine: (multinode-591074) Calling .GetSSHKeyPath
	I1205 00:25:58.617288   53565 main.go:141] libmachine: (multinode-591074) Calling .GetSSHUsername
	I1205 00:25:58.617411   53565 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074/id_rsa Username:docker}
	I1205 00:25:58.692998   53565 ssh_runner.go:195] Run: systemctl --version
	I1205 00:25:58.698647   53565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 00:25:58.712574   53565 kubeconfig.go:125] found "multinode-591074" server: "https://192.168.39.176:8443"
	I1205 00:25:58.712605   53565 api_server.go:166] Checking apiserver status ...
	I1205 00:25:58.712636   53565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:25:58.727362   53565 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup
	W1205 00:25:58.738175   53565 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 00:25:58.738227   53565 ssh_runner.go:195] Run: ls
	I1205 00:25:58.742402   53565 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1205 00:25:58.746605   53565 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1205 00:25:58.746627   53565 status.go:463] multinode-591074 apiserver status = Running (err=<nil>)
	I1205 00:25:58.746636   53565 status.go:176] multinode-591074 status: &{Name:multinode-591074 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 00:25:58.746651   53565 status.go:174] checking status of multinode-591074-m02 ...
	I1205 00:25:58.746941   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.746972   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.761962   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42437
	I1205 00:25:58.762427   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.762900   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.762920   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.763266   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.763455   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetState
	I1205 00:25:58.764937   53565 status.go:371] multinode-591074-m02 host status = "Running" (err=<nil>)
	I1205 00:25:58.764956   53565 host.go:66] Checking if "multinode-591074-m02" exists ...
	I1205 00:25:58.765234   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.765276   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.780173   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I1205 00:25:58.780624   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.781063   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.781087   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.781396   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.781587   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetIP
	I1205 00:25:58.784291   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | domain multinode-591074-m02 has defined MAC address 52:54:00:0d:3d:dd in network mk-multinode-591074
	I1205 00:25:58.784625   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:3d:dd", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:24:14 +0000 UTC Type:0 Mac:52:54:00:0d:3d:dd Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-591074-m02 Clientid:01:52:54:00:0d:3d:dd}
	I1205 00:25:58.784651   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | domain multinode-591074-m02 has defined IP address 192.168.39.73 and MAC address 52:54:00:0d:3d:dd in network mk-multinode-591074
	I1205 00:25:58.784816   53565 host.go:66] Checking if "multinode-591074-m02" exists ...
	I1205 00:25:58.785116   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.785150   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.800157   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I1205 00:25:58.800565   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.801038   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.801061   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.801352   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.801524   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .DriverName
	I1205 00:25:58.801674   53565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 00:25:58.801691   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetSSHHostname
	I1205 00:25:58.804667   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | domain multinode-591074-m02 has defined MAC address 52:54:00:0d:3d:dd in network mk-multinode-591074
	I1205 00:25:58.805059   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:3d:dd", ip: ""} in network mk-multinode-591074: {Iface:virbr1 ExpiryTime:2024-12-05 01:24:14 +0000 UTC Type:0 Mac:52:54:00:0d:3d:dd Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-591074-m02 Clientid:01:52:54:00:0d:3d:dd}
	I1205 00:25:58.805080   53565 main.go:141] libmachine: (multinode-591074-m02) DBG | domain multinode-591074-m02 has defined IP address 192.168.39.73 and MAC address 52:54:00:0d:3d:dd in network mk-multinode-591074
	I1205 00:25:58.805256   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetSSHPort
	I1205 00:25:58.805449   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetSSHKeyPath
	I1205 00:25:58.805587   53565 main.go:141] libmachine: (multinode-591074-m02) Calling .GetSSHUsername
	I1205 00:25:58.805690   53565 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20045-14126/.minikube/machines/multinode-591074-m02/id_rsa Username:docker}
	I1205 00:25:58.888634   53565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 00:25:58.902890   53565 status.go:176] multinode-591074-m02 status: &{Name:multinode-591074-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 00:25:58.902930   53565 status.go:174] checking status of multinode-591074-m03 ...
	I1205 00:25:58.903383   53565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 00:25:58.903434   53565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 00:25:58.920560   53565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1205 00:25:58.921038   53565 main.go:141] libmachine: () Calling .GetVersion
	I1205 00:25:58.921550   53565 main.go:141] libmachine: Using API Version  1
	I1205 00:25:58.921573   53565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 00:25:58.921897   53565 main.go:141] libmachine: () Calling .GetMachineName
	I1205 00:25:58.922086   53565 main.go:141] libmachine: (multinode-591074-m03) Calling .GetState
	I1205 00:25:58.923643   53565 status.go:371] multinode-591074-m03 host status = "Stopped" (err=<nil>)
	I1205 00:25:58.923658   53565 status.go:384] host is not running, skipping remaining checks
	I1205 00:25:58.923664   53565 status.go:176] multinode-591074-m03 status: &{Name:multinode-591074-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
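Note: the stderr trace above shows how `minikube status` classifies each node: it connects over SSH, confirms the kubelet service, locates the kube-apiserver process (the failed freezer-cgroup lookup is harmless on cgroup v2 guests), and finally treats an HTTP 200 from the apiserver's /healthz endpoint as Running; a host it cannot reach is reported Stopped and the remaining checks are skipped. A minimal sketch of that final health probe, assuming the endpoint shown in the log and skipping certificate verification as a shortcut (the real client uses the profile's CA material):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the trace above; adjust for your own cluster.
        url := "https://192.168.39.176:8443/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // Shortcut for a sketch only: the apiserver certificate is self-signed,
            // so verification is disabled here instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("apiserver status: Stopped or unreachable:", err)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            fmt.Println("apiserver status: Running")
        } else {
            fmt.Println("apiserver status: unexpected response:", resp.Status)
        }
    }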
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (39.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 node start m03 -v=7 --alsologtostderr: (38.623377986s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.24s)

TestMultiNode/serial/DeleteNode (2.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-591074 node delete m03: (1.749296811s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)

TestMultiNode/serial/RestartMultiNode (178.22s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-591074 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 00:35:07.019788   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-591074 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.71154082s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-591074 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.22s)

TestMultiNode/serial/ValidateNameConflict (40.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-591074
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-591074-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-591074-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.910765ms)

                                                
                                                
-- stdout --
	* [multinode-591074-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-591074-m02' is duplicated with machine name 'multinode-591074-m02' in profile 'multinode-591074'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-591074-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-591074-m03 --driver=kvm2  --container-runtime=crio: (39.582705406s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-591074
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-591074: exit status 80 (212.951944ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-591074 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-591074-m03 already exists in multinode-591074-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-591074-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.67s)

TestScheduledStopUnix (113.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-185468 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-185468 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.635599938s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-185468 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-185468 -n scheduled-stop-185468
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-185468 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1205 00:41:53.726154   21385 retry.go:31] will retry after 98.299µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.727323   21385 retry.go:31] will retry after 157.484µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.728480   21385 retry.go:31] will retry after 222.38µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.729610   21385 retry.go:31] will retry after 209.487µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.730715   21385 retry.go:31] will retry after 417.277µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.731882   21385 retry.go:31] will retry after 794.522µs: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.733003   21385 retry.go:31] will retry after 1.358243ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.735180   21385 retry.go:31] will retry after 1.634368ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.737396   21385 retry.go:31] will retry after 3.193602ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.741623   21385 retry.go:31] will retry after 3.866803ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.745883   21385 retry.go:31] will retry after 6.770368ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.753100   21385 retry.go:31] will retry after 5.410147ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.759313   21385 retry.go:31] will retry after 16.921712ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.776546   21385 retry.go:31] will retry after 10.846733ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
I1205 00:41:53.787810   21385 retry.go:31] will retry after 20.609979ms: open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/scheduled-stop-185468/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-185468 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-185468 -n scheduled-stop-185468
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-185468
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-185468 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-185468
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-185468: exit status 7 (65.401372ms)

                                                
                                                
-- stdout --
	scheduled-stop-185468
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-185468 -n scheduled-stop-185468
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-185468 -n scheduled-stop-185468: exit status 7 (64.277571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-185468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-185468
--- PASS: TestScheduledStopUnix (113.22s)

                                                
                                    
TestRunningBinaryUpgrade (190.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1482155435 start -p running-upgrade-966977 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1482155435 start -p running-upgrade-966977 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.095566335s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-966977 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-966977 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.637714006s)
helpers_test.go:175: Cleaning up "running-upgrade-966977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-966977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-966977: (1.368349029s)
--- PASS: TestRunningBinaryUpgrade (190.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.922056ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-864530] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-14126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-14126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-864530 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-864530 --driver=kvm2  --container-runtime=crio: (1m33.766550828s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-864530 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (116.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.817494521 start -p stopped-upgrade-853552 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.817494521 start -p stopped-upgrade-853552 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m5.19937124s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.817494521 -p stopped-upgrade-853552 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.817494521 -p stopped-upgrade-853552 stop: (2.130610758s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-853552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-853552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.830576871s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.16s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1205 00:45:07.014709   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.426822658s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-864530 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-864530 status -o json: exit status 2 (265.562439ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-864530","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-864530
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-864530: (1.02041789s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.71s)

                                                
                                    
TestNoKubernetes/serial/Start (28.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-864530 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.275367112s)
--- PASS: TestNoKubernetes/serial/Start (28.28s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-864530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-864530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.995962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (27.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.886633233s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.554621628s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.44s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-864530
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-864530: (1.316012012s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-864530 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-864530 --driver=kvm2  --container-runtime=crio: (22.770863277s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.77s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-853552
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-864530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-864530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.019667ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestPause/serial/Start (129.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-480268 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-480268 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m9.556129296s)
--- PASS: TestPause/serial/Start (129.56s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-480268 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1205 00:50:07.015049   21385 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-14126/.minikube/profiles/functional-524018/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-480268 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.600041886s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.62s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-480268 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-480268 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-480268 --output=json --layout=cluster: exit status 2 (239.568624ms)

                                                
                                                
-- stdout --
	{"Name":"pause-480268","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-480268","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-480268 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-480268 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (0.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-480268 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    

Test skip (32/213)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    